2025-06-02 16:51:21.916171 | Job console starting
2025-06-02 16:51:21.934190 | Updating git repos
2025-06-02 16:51:22.011471 | Cloning repos into workspace
2025-06-02 16:51:22.170234 | Restoring repo states
2025-06-02 16:51:22.190924 | Merging changes
2025-06-02 16:51:22.190947 | Checking out repos
2025-06-02 16:51:22.483914 | Preparing playbooks
2025-06-02 16:51:23.076632 | Running Ansible setup
2025-06-02 16:51:27.431194 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-02 16:51:28.217251 |
2025-06-02 16:51:28.217445 | PLAY [Base pre]
2025-06-02 16:51:28.235966 |
2025-06-02 16:51:28.236134 | TASK [Setup log path fact]
2025-06-02 16:51:28.272078 | orchestrator | ok
2025-06-02 16:51:28.291929 |
2025-06-02 16:51:28.292162 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 16:51:28.332219 | orchestrator | ok
2025-06-02 16:51:28.344309 |
2025-06-02 16:51:28.344429 | TASK [emit-job-header : Print job information]
2025-06-02 16:51:28.390079 | # Job Information
2025-06-02 16:51:28.390286 | Ansible Version: 2.16.14
2025-06-02 16:51:28.390322 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-02 16:51:28.390356 | Pipeline: post
2025-06-02 16:51:28.390379 | Executor: 521e9411259a
2025-06-02 16:51:28.390400 | Triggered by: https://github.com/osism/testbed/commit/887b41f5cd4fd4903028405821376cedcc5ffa4a
2025-06-02 16:51:28.390423 | Event ID: cbb70308-3fd1-11f0-9e38-1687f67235b8
2025-06-02 16:51:28.397680 |
2025-06-02 16:51:28.397808 | LOOP [emit-job-header : Print node information]
2025-06-02 16:51:28.524878 | orchestrator | ok:
2025-06-02 16:51:28.525128 | orchestrator | # Node Information
2025-06-02 16:51:28.525249 | orchestrator | Inventory Hostname: orchestrator
2025-06-02 16:51:28.525285 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-02 16:51:28.525309 | orchestrator | Username: zuul-testbed02
2025-06-02 16:51:28.525330 | orchestrator | Distro: Debian 12.11
2025-06-02 16:51:28.525356 | orchestrator | Provider: static-testbed
2025-06-02 16:51:28.525377 | orchestrator | Region:
2025-06-02 16:51:28.525399 | orchestrator | Label: testbed-orchestrator
2025-06-02 16:51:28.525420 | orchestrator | Product Name: OpenStack Nova
2025-06-02 16:51:28.525439 | orchestrator | Interface IP: 81.163.193.140
2025-06-02 16:51:28.547474 |
2025-06-02 16:51:28.547622 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-02 16:51:29.079794 | orchestrator -> localhost | changed
2025-06-02 16:51:29.097695 |
2025-06-02 16:51:29.097938 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-02 16:51:30.160747 | orchestrator -> localhost | changed
2025-06-02 16:51:30.184927 |
2025-06-02 16:51:30.185078 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-02 16:51:30.486362 | orchestrator -> localhost | ok
2025-06-02 16:51:30.498020 |
2025-06-02 16:51:30.498190 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-02 16:51:30.535902 | orchestrator | ok
2025-06-02 16:51:30.558474 | orchestrator | included: /var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-02 16:51:30.567087 |
2025-06-02 16:51:30.567214 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-02 16:51:31.443707 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-02 16:51:31.444282 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/work/dd0960a543a64f20bce8e7355c8ec002_id_rsa
2025-06-02 16:51:31.444399 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/work/dd0960a543a64f20bce8e7355c8ec002_id_rsa.pub
2025-06-02 16:51:31.444475 | orchestrator -> localhost | The key fingerprint is:
2025-06-02 16:51:31.444542 | orchestrator -> localhost | SHA256:iIWt1HCsPDafldyAhf3+llTseI5/VwH5gr8CJapDZc4 zuul-build-sshkey
2025-06-02 16:51:31.444605 | orchestrator -> localhost | The key's randomart image is:
2025-06-02 16:51:31.444687 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-02 16:51:31.444751 | orchestrator -> localhost | | ...=. . |
2025-06-02 16:51:31.444813 | orchestrator -> localhost | | *+ o o |
2025-06-02 16:51:31.444920 | orchestrator -> localhost | | .o.+. = ..o |
2025-06-02 16:51:31.444980 | orchestrator -> localhost | | .*+ ++.oo .oo |
2025-06-02 16:51:31.445037 | orchestrator -> localhost | | .o+*oS.o .+. .|
2025-06-02 16:51:31.445107 | orchestrator -> localhost | | .oE .. o.o .|
2025-06-02 16:51:31.445167 | orchestrator -> localhost | | . . .o =. .|
2025-06-02 16:51:31.445225 | orchestrator -> localhost | | o .=.. o|
2025-06-02 16:51:31.445286 | orchestrator -> localhost | | . ......|
2025-06-02 16:51:31.445345 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-02 16:51:31.445493 | orchestrator -> localhost | ok: Runtime: 0:00:00.349749
2025-06-02 16:51:31.460164 |
2025-06-02 16:51:31.460318 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-02 16:51:31.496675 | orchestrator | ok
2025-06-02 16:51:31.513654 | orchestrator | included: /var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-02 16:51:31.531911 |
2025-06-02 16:51:31.532097 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-02 16:51:31.559293 | orchestrator | skipping: Conditional result was False
2025-06-02 16:51:31.573297 |
2025-06-02 16:51:31.573429 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-02 16:51:32.292129 | orchestrator | changed
2025-06-02 16:51:32.301879 |
2025-06-02 16:51:32.302026 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-02 16:51:32.591783 | orchestrator | ok
2025-06-02 16:51:32.600415 |
2025-06-02 16:51:32.600540 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-02 16:51:33.034732 | orchestrator | ok
2025-06-02 16:51:33.043235 |
2025-06-02 16:51:33.043377 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-02 16:51:33.463560 | orchestrator | ok
2025-06-02 16:51:33.471508 |
2025-06-02 16:51:33.471637 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-02 16:51:33.497956 | orchestrator | skipping: Conditional result was False
2025-06-02 16:51:33.515573 |
2025-06-02 16:51:33.515765 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-02 16:51:34.006124 | orchestrator -> localhost | changed
2025-06-02 16:51:34.025276 |
2025-06-02 16:51:34.025416 | TASK [add-build-sshkey : Add back temp key]
2025-06-02 16:51:34.415208 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/work/dd0960a543a64f20bce8e7355c8ec002_id_rsa (zuul-build-sshkey)
2025-06-02 16:51:34.415485 | orchestrator -> localhost | ok: Runtime: 0:00:00.020489
2025-06-02 16:51:34.423279 |
2025-06-02 16:51:34.423397 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-02 16:51:34.859182 | orchestrator | ok
2025-06-02 16:51:34.868193 |
2025-06-02 16:51:34.868350 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-02 16:51:34.892759 | orchestrator | skipping: Conditional result was False
2025-06-02 16:51:34.953116 |
2025-06-02 16:51:34.953260 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-02 16:51:35.367015 | orchestrator | ok
2025-06-02 16:51:35.381809 |
2025-06-02 16:51:35.381994 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-02 16:51:35.425984 | orchestrator | ok
2025-06-02 16:51:35.435906 |
2025-06-02 16:51:35.436043 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-02 16:51:35.756063 | orchestrator -> localhost | ok
2025-06-02 16:51:35.771141 |
2025-06-02 16:51:35.771307 | TASK [validate-host : Collect information about the host]
2025-06-02 16:51:36.958621 | orchestrator | ok
2025-06-02 16:51:36.977191 |
2025-06-02 16:51:36.977689 | TASK [validate-host : Sanitize hostname]
2025-06-02 16:51:37.045626 | orchestrator | ok
2025-06-02 16:51:37.058705 |
2025-06-02 16:51:37.058961 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-02 16:51:37.692546 | orchestrator -> localhost | changed
2025-06-02 16:51:37.699369 |
2025-06-02 16:51:37.699488 | TASK [validate-host : Collect information about zuul worker]
2025-06-02 16:51:38.149964 | orchestrator | ok
2025-06-02 16:51:38.158584 |
2025-06-02 16:51:38.158738 | TASK [validate-host : Write out all zuul information for each host]
2025-06-02 16:51:38.741017 | orchestrator -> localhost | changed
2025-06-02 16:51:38.757997 |
2025-06-02 16:51:38.758131 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-02 16:51:39.057328 | orchestrator | ok
2025-06-02 16:51:39.065737 |
2025-06-02 16:51:39.065889 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-02 16:52:17.619322 | orchestrator | changed:
2025-06-02 16:52:17.619721 | orchestrator | .d..t...... src/
2025-06-02 16:52:17.619772 | orchestrator | .d..t...... src/github.com/
2025-06-02 16:52:17.619797 | orchestrator | .d..t...... src/github.com/osism/
2025-06-02 16:52:17.619837 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-02 16:52:17.619875 | orchestrator | RedHat.yml
2025-06-02 16:52:17.643922 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-02 16:52:17.643939 | orchestrator | RedHat.yml
2025-06-02 16:52:17.643991 | orchestrator | = 1.53.0"...
2025-06-02 16:52:30.957372 | orchestrator | 16:52:30.957 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-02 16:52:31.040443 | orchestrator | 16:52:31.040 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-02 16:52:32.357146 | orchestrator | 16:52:32.351 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-02 16:52:34.239372 | orchestrator | 16:52:34.239 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-02 16:52:34.914820 | orchestrator | 16:52:34.914 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-02 16:52:35.775484 | orchestrator | 16:52:35.775 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 16:52:36.783654 | orchestrator | 16:52:36.782 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-02 16:52:37.630663 | orchestrator | 16:52:37.630 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 16:52:37.631965 | orchestrator | 16:52:37.630 STDOUT terraform: Providers are signed by their developers.
2025-06-02 16:52:37.631978 | orchestrator | 16:52:37.630 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-02 16:52:37.631983 | orchestrator | 16:52:37.630 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-02 16:52:37.631988 | orchestrator | 16:52:37.630 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-02 16:52:37.631996 | orchestrator | 16:52:37.630 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-02 16:52:37.632005 | orchestrator | 16:52:37.630 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-02 16:52:37.632010 | orchestrator | 16:52:37.630 STDOUT terraform: you run "tofu init" in the future.
2025-06-02 16:52:37.632014 | orchestrator | 16:52:37.631 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-02 16:52:37.632018 | orchestrator | 16:52:37.631 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-02 16:52:37.632022 | orchestrator | 16:52:37.631 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-02 16:52:37.632026 | orchestrator | 16:52:37.631 STDOUT terraform: should now work.
2025-06-02 16:52:37.632030 | orchestrator | 16:52:37.631 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-02 16:52:37.632034 | orchestrator | 16:52:37.631 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-02 16:52:37.632039 | orchestrator | 16:52:37.631 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-02 16:52:37.837956 | orchestrator | 16:52:37.837 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-02 16:52:38.084710 | orchestrator | 16:52:38.084 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-02 16:52:38.084829 | orchestrator | 16:52:38.084 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-02 16:52:38.084839 | orchestrator | 16:52:38.084 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-02 16:52:38.084845 | orchestrator | 16:52:38.084 STDOUT terraform: for this configuration.
2025-06-02 16:52:38.298574 | orchestrator | 16:52:38.296 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-02 16:52:38.410378 | orchestrator | 16:52:38.410 STDOUT terraform: ci.auto.tfvars
2025-06-02 16:52:38.414175 | orchestrator | 16:52:38.414 STDOUT terraform: default_custom.tf
2025-06-02 16:52:38.596948 | orchestrator | 16:52:38.595 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-02 16:52:39.419866 | orchestrator | 16:52:39.419 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-02 16:52:39.980618 | orchestrator | 16:52:39.978 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-02 16:52:40.242159 | orchestrator | 16:52:40.241 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-02 16:52:40.242271 | orchestrator | 16:52:40.242 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-02 16:52:40.242279 | orchestrator | 16:52:40.242 STDOUT terraform:  + create
2025-06-02 16:52:40.242322 | orchestrator | 16:52:40.242 STDOUT terraform:  <= read (data resources)
2025-06-02 16:52:40.242408 | orchestrator | 16:52:40.242 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-02 16:52:40.242634 | orchestrator | 16:52:40.242 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-06-02 16:52:40.242718 | orchestrator | 16:52:40.242 STDOUT terraform:  # (config refers to values not yet known)
2025-06-02 16:52:40.242805 | orchestrator | 16:52:40.242 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-02 16:52:40.242887 | orchestrator | 16:52:40.242 STDOUT terraform:  + checksum = (known after apply)
2025-06-02 16:52:40.242969 | orchestrator | 16:52:40.242 STDOUT terraform:  + created_at = (known after apply)
2025-06-02 16:52:40.243073 | orchestrator | 16:52:40.242 STDOUT terraform:  + file = (known after apply)
2025-06-02 16:52:40.243148 | orchestrator | 16:52:40.243 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.243244 | orchestrator | 16:52:40.243 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.243330 | orchestrator | 16:52:40.243 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-02 16:52:40.243412 | orchestrator | 16:52:40.243 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-02 16:52:40.243468 | orchestrator | 16:52:40.243 STDOUT terraform:  + most_recent = true
2025-06-02 16:52:40.243549 | orchestrator | 16:52:40.243 STDOUT terraform:  + name = (known after apply)
2025-06-02 16:52:40.243627 | orchestrator | 16:52:40.243 STDOUT terraform:  + protected = (known after apply)
2025-06-02 16:52:40.243719 | orchestrator | 16:52:40.243 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.243801 | orchestrator | 16:52:40.243 STDOUT terraform:  + schema = (known after apply)
2025-06-02 16:52:40.243919 | orchestrator | 16:52:40.243 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-02 16:52:40.244037 | orchestrator | 16:52:40.243 STDOUT terraform:  + tags = (known after apply)
2025-06-02 16:52:40.244121 | orchestrator | 16:52:40.244 STDOUT terraform:  + updated_at = (known after apply)
2025-06-02 16:52:40.244171 | orchestrator | 16:52:40.244 STDOUT terraform:  }
2025-06-02 16:52:40.244355 | orchestrator | 16:52:40.244 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-06-02 16:52:40.244428 | orchestrator | 16:52:40.244 STDOUT terraform:  # (config refers to values not yet known)
2025-06-02 16:52:40.244536 | orchestrator | 16:52:40.244 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-02 16:52:40.244598 | orchestrator | 16:52:40.244 STDOUT terraform:  + checksum = (known after apply)
2025-06-02 16:52:40.244671 | orchestrator | 16:52:40.244 STDOUT terraform:  + created_at = (known after apply)
2025-06-02 16:52:40.244747 | orchestrator | 16:52:40.244 STDOUT terraform:  + file = (known after apply)
2025-06-02 16:52:40.244820 | orchestrator | 16:52:40.244 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.244896 | orchestrator | 16:52:40.244 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.244969 | orchestrator | 16:52:40.244 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-02 16:52:40.245042 | orchestrator | 16:52:40.244 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-02 16:52:40.245096 | orchestrator | 16:52:40.245 STDOUT terraform:  + most_recent = true
2025-06-02 16:52:40.245160 | orchestrator | 16:52:40.245 STDOUT terraform:  + name = (known after apply)
2025-06-02 16:52:40.245231 | orchestrator | 16:52:40.245 STDOUT terraform:  + protected = (known after apply)
2025-06-02 16:52:40.245343 | orchestrator | 16:52:40.245 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.245416 | orchestrator | 16:52:40.245 STDOUT terraform:  + schema = (known after apply)
2025-06-02 16:52:40.245500 | orchestrator | 16:52:40.245 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-02 16:52:40.245567 | orchestrator | 16:52:40.245 STDOUT terraform:  + tags = (known after apply)
2025-06-02 16:52:40.245637 | orchestrator | 16:52:40.245 STDOUT terraform:  + updated_at = (known after apply)
2025-06-02 16:52:40.245704 | orchestrator | 16:52:40.245 STDOUT terraform:  }
2025-06-02 16:52:40.245773 | orchestrator | 16:52:40.245 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-06-02 16:52:40.245842 | orchestrator | 16:52:40.245 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-06-02 16:52:40.245910 | orchestrator | 16:52:40.245 STDOUT terraform:  + content = (known after apply)
2025-06-02 16:52:40.245980 | orchestrator | 16:52:40.245 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:40.250117 | orchestrator | 16:52:40.245 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:40.250178 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:40.250196 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:40.250203 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:40.250207 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:40.250212 | orchestrator | 16:52:40.246 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 16:52:40.250218 | orchestrator | 16:52:40.246 STDOUT terraform:  + file_permission = "0644"
2025-06-02 16:52:40.250222 | orchestrator | 16:52:40.246 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-06-02 16:52:40.250226 | orchestrator | 16:52:40.246 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.250230 | orchestrator | 16:52:40.246 STDOUT terraform:  }
2025-06-02 16:52:40.250234 | orchestrator | 16:52:40.246 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-06-02 16:52:40.250238 | orchestrator | 16:52:40.246 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-06-02 16:52:40.250241 | orchestrator | 16:52:40.246 STDOUT terraform:  + content = (known after apply)
2025-06-02 16:52:40.250245 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:40.250331 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:40.250336 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:40.250340 | orchestrator | 16:52:40.246 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:40.250344 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:40.250347 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:40.250351 | orchestrator | 16:52:40.247 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 16:52:40.250355 | orchestrator | 16:52:40.247 STDOUT terraform:  + file_permission = "0644"
2025-06-02 16:52:40.250359 | orchestrator | 16:52:40.247 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-06-02 16:52:40.250362 | orchestrator | 16:52:40.247 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.250366 | orchestrator | 16:52:40.247 STDOUT terraform:  }
2025-06-02 16:52:40.250370 | orchestrator | 16:52:40.247 STDOUT terraform:  # local_file.inventory will be created
2025-06-02 16:52:40.250374 | orchestrator | 16:52:40.247 STDOUT terraform:  + resource "local_file" "inventory" {
2025-06-02 16:52:40.250377 | orchestrator | 16:52:40.247 STDOUT terraform:  + content = (known after apply)
2025-06-02 16:52:40.250381 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:40.250385 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:40.250411 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:40.250421 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:40.250425 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:40.250429 | orchestrator | 16:52:40.247 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:40.250433 | orchestrator | 16:52:40.247 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 16:52:40.250436 | orchestrator | 16:52:40.247 STDOUT terraform:  + file_permission = "0644"
2025-06-02 16:52:40.250440 | orchestrator | 16:52:40.248 STDOUT terraform:  + filename = "inventory.ci"
2025-06-02 16:52:40.250444 | orchestrator | 16:52:40.248 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.250462 | orchestrator | 16:52:40.248 STDOUT terraform:  }
2025-06-02 16:52:40.250466 | orchestrator | 16:52:40.248 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-06-02 16:52:40.250469 | orchestrator | 16:52:40.248 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-06-02 16:52:40.250473 | orchestrator | 16:52:40.248 STDOUT terraform:  + content = (sensitive value)
2025-06-02 16:52:40.250477 | orchestrator | 16:52:40.248 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:40.250484 | orchestrator | 16:52:40.248 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:40.250488 | orchestrator | 16:52:40.248 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:40.250492 | orchestrator | 16:52:40.248 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:40.250495 | orchestrator | 16:52:40.248 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:40.250499 | orchestrator | 16:52:40.248 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:40.250503 | orchestrator | 16:52:40.248 STDOUT terraform:  + directory_permission = "0700"
2025-06-02 16:52:40.250507 | orchestrator | 16:52:40.248 STDOUT terraform:  + file_permission = "0600"
2025-06-02 16:52:40.250510 | orchestrator | 16:52:40.248 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-06-02 16:52:40.250514 | orchestrator | 16:52:40.248 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.250518 | orchestrator | 16:52:40.248 STDOUT terraform:  }
2025-06-02 16:52:40.250522 | orchestrator | 16:52:40.248 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-06-02 16:52:40.250525 | orchestrator | 16:52:40.249 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-06-02 16:52:40.250529 | orchestrator | 16:52:40.249 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.250533 | orchestrator | 16:52:40.249 STDOUT terraform:  }
2025-06-02 16:52:40.250537 | orchestrator | 16:52:40.249 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-02 16:52:40.250543 | orchestrator | 16:52:40.249 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-02 16:52:40.250547 | orchestrator | 16:52:40.249 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.250555 | orchestrator | 16:52:40.249 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:40.250558 | orchestrator | 16:52:40.249 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.250562 | orchestrator | 16:52:40.249 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:40.250566 | orchestrator | 16:52:40.249 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.250570 | orchestrator | 16:52:40.249 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-06-02 16:52:40.250577 | orchestrator | 16:52:40.249 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.250581 | orchestrator | 16:52:40.249 STDOUT terraform:  + size = 80
2025-06-02 16:52:40.250584 | orchestrator | 16:52:40.249 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:40.250588 | orchestrator | 16:52:40.249 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:40.250592 | orchestrator | 16:52:40.249 STDOUT terraform:  }
2025-06-02 16:52:40.250596 | orchestrator | 16:52:40.249 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-02 16:52:40.250599 | orchestrator | 16:52:40.249 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:40.257001 | orchestrator | 16:52:40.249 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.257053 | orchestrator | 16:52:40.256 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:40.257075 | orchestrator | 16:52:40.257 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.257125 | orchestrator | 16:52:40.257 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:40.257178 | orchestrator | 16:52:40.257 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.257237 | orchestrator | 16:52:40.257 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-06-02 16:52:40.257332 | orchestrator | 16:52:40.257 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.257362 | orchestrator | 16:52:40.257 STDOUT terraform:  + size = 80
2025-06-02 16:52:40.257397 | orchestrator | 16:52:40.257 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:40.257430 | orchestrator | 16:52:40.257 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:40.257440 | orchestrator | 16:52:40.257 STDOUT terraform:  }
2025-06-02 16:52:40.257510 | orchestrator | 16:52:40.257 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-02 16:52:40.257568 | orchestrator | 16:52:40.257 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:40.257624 | orchestrator | 16:52:40.257 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.257648 | orchestrator | 16:52:40.257 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:40.257695 | orchestrator | 16:52:40.257 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.257742 | orchestrator | 16:52:40.257 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:40.257788 | orchestrator | 16:52:40.257 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.257848 | orchestrator | 16:52:40.257 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-06-02 16:52:40.257896 | orchestrator | 16:52:40.257 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.257923 | orchestrator | 16:52:40.257 STDOUT terraform:  + size = 80
2025-06-02 16:52:40.257954 | orchestrator | 16:52:40.257 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:40.257986 | orchestrator | 16:52:40.257 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:40.257993 | orchestrator | 16:52:40.257 STDOUT terraform:  }
2025-06-02 16:52:40.258078 | orchestrator | 16:52:40.257 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-02 16:52:40.258137 | orchestrator | 16:52:40.258 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:40.258183 | orchestrator | 16:52:40.258 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.258214 | orchestrator | 16:52:40.258 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:40.258295 | orchestrator | 16:52:40.258 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.258331 | orchestrator | 16:52:40.258 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:40.258379 | orchestrator | 16:52:40.258 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.258437 | orchestrator | 16:52:40.258 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-06-02 16:52:40.258480 | orchestrator | 16:52:40.258 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.258508 | orchestrator | 16:52:40.258 STDOUT terraform:  + size = 80
2025-06-02 16:52:40.258540 | orchestrator | 16:52:40.258 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:40.258573 | orchestrator | 16:52:40.258 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:40.258581 | orchestrator | 16:52:40.258 STDOUT terraform:  }
2025-06-02 16:52:40.258643 | orchestrator | 16:52:40.258 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-02 16:52:40.258700 | orchestrator | 16:52:40.258 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:40.258757 | orchestrator | 16:52:40.258 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.258789 | orchestrator | 16:52:40.258 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:40.258837 | orchestrator | 16:52:40.258 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.258884 | orchestrator | 16:52:40.258 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:40.258931 | orchestrator | 16:52:40.258 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.258989 | orchestrator | 16:52:40.258 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-06-02 16:52:40.259034 | orchestrator | 16:52:40.258 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.259052 | orchestrator | 16:52:40.259 STDOUT terraform:  + size = 80
2025-06-02 16:52:40.259089 | orchestrator | 16:52:40.259 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:40.259121 | orchestrator | 16:52:40.259 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:40.259127 | orchestrator | 16:52:40.259 STDOUT terraform:  }
2025-06-02 16:52:40.259201 | orchestrator | 16:52:40.259 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-02 16:52:40.259286 | orchestrator | 16:52:40.259 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:40.259337 | orchestrator | 16:52:40.259 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.259387 | orchestrator | 16:52:40.259 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:40.259435 | orchestrator | 16:52:40.259 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.259481 | orchestrator | 16:52:40.259 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:40.259530 | orchestrator | 16:52:40.259 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.259584 | orchestrator | 16:52:40.259 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-06-02 16:52:40.259630 | orchestrator | 16:52:40.259 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.259657 | orchestrator | 16:52:40.259 STDOUT terraform:  + size = 80
2025-06-02 16:52:40.259702 | orchestrator | 16:52:40.259 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:40.259709 | orchestrator | 16:52:40.259 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:40.259735 | orchestrator | 16:52:40.259 STDOUT terraform:  }
2025-06-02 16:52:40.259792 | orchestrator | 16:52:40.259 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-02 16:52:40.259851 | orchestrator | 16:52:40.259 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:40.259893 | orchestrator | 16:52:40.259 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.259923 | orchestrator | 16:52:40.259 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:40.259968 | orchestrator | 16:52:40.259 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.260037 | orchestrator | 16:52:40.259 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:40.260080 | orchestrator | 16:52:40.260 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:40.260160 | orchestrator | 16:52:40.260 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-06-02 16:52:40.260209 | orchestrator | 16:52:40.260 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.260217 | orchestrator | 16:52:40.260 STDOUT terraform:  + size = 80
2025-06-02 16:52:40.260285 | orchestrator | 16:52:40.260 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:40.260292 | orchestrator | 16:52:40.260 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:40.260298 | orchestrator | 16:52:40.260 STDOUT terraform:  }
2025-06-02 16:52:40.260354 | orchestrator | 16:52:40.260 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-02 16:52:40.261330 | orchestrator | 16:52:40.260 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 16:52:40.261369 | orchestrator | 16:52:40.261 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:40.261400 | orchestrator | 16:52:40.261 STDOUT terraform:  +
availability_zone = "nova" 2025-06-02 16:52:40.261442 | orchestrator | 16:52:40.261 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.261498 | orchestrator | 16:52:40.261 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.261548 | orchestrator | 16:52:40.261 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-02 16:52:40.261599 | orchestrator | 16:52:40.261 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.261621 | orchestrator | 16:52:40.261 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.261673 | orchestrator | 16:52:40.261 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.261710 | orchestrator | 16:52:40.261 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.261718 | orchestrator | 16:52:40.261 STDOUT terraform:  } 2025-06-02 16:52:40.261779 | orchestrator | 16:52:40.261 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-02 16:52:40.261840 | orchestrator | 16:52:40.261 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.261886 | orchestrator | 16:52:40.261 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.261913 | orchestrator | 16:52:40.261 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.261958 | orchestrator | 16:52:40.261 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.262001 | orchestrator | 16:52:40.261 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.262068 | orchestrator | 16:52:40.261 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-02 16:52:40.262113 | orchestrator | 16:52:40.262 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.262146 | orchestrator | 16:52:40.262 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.262178 | orchestrator | 16:52:40.262 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.262212 | orchestrator | 
16:52:40.262 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.262220 | orchestrator | 16:52:40.262 STDOUT terraform:  } 2025-06-02 16:52:40.262335 | orchestrator | 16:52:40.262 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-02 16:52:40.262388 | orchestrator | 16:52:40.262 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.262432 | orchestrator | 16:52:40.262 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.262464 | orchestrator | 16:52:40.262 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.262507 | orchestrator | 16:52:40.262 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.262553 | orchestrator | 16:52:40.262 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.262600 | orchestrator | 16:52:40.262 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-02 16:52:40.262642 | orchestrator | 16:52:40.262 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.262669 | orchestrator | 16:52:40.262 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.262698 | orchestrator | 16:52:40.262 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.262727 | orchestrator | 16:52:40.262 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.262734 | orchestrator | 16:52:40.262 STDOUT terraform:  } 2025-06-02 16:52:40.262798 | orchestrator | 16:52:40.262 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-02 16:52:40.262844 | orchestrator | 16:52:40.262 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.262887 | orchestrator | 16:52:40.262 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.262915 | orchestrator | 16:52:40.262 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.262955 | orchestrator | 16:52:40.262 STDOUT 
terraform:  + id = (known after apply) 2025-06-02 16:52:40.262994 | orchestrator | 16:52:40.262 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.263037 | orchestrator | 16:52:40.262 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-02 16:52:40.263094 | orchestrator | 16:52:40.263 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.263118 | orchestrator | 16:52:40.263 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.263153 | orchestrator | 16:52:40.263 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.263175 | orchestrator | 16:52:40.263 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.263183 | orchestrator | 16:52:40.263 STDOUT terraform:  } 2025-06-02 16:52:40.263244 | orchestrator | 16:52:40.263 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-02 16:52:40.263293 | orchestrator | 16:52:40.263 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.263332 | orchestrator | 16:52:40.263 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.263367 | orchestrator | 16:52:40.263 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.263406 | orchestrator | 16:52:40.263 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.263446 | orchestrator | 16:52:40.263 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.263490 | orchestrator | 16:52:40.263 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-02 16:52:40.263532 | orchestrator | 16:52:40.263 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.263573 | orchestrator | 16:52:40.263 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.263600 | orchestrator | 16:52:40.263 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.263628 | orchestrator | 16:52:40.263 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.263644 | 
orchestrator | 16:52:40.263 STDOUT terraform:  } 2025-06-02 16:52:40.263691 | orchestrator | 16:52:40.263 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-02 16:52:40.263738 | orchestrator | 16:52:40.263 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.263778 | orchestrator | 16:52:40.263 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.263804 | orchestrator | 16:52:40.263 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.263846 | orchestrator | 16:52:40.263 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.263886 | orchestrator | 16:52:40.263 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.263933 | orchestrator | 16:52:40.263 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-02 16:52:40.263971 | orchestrator | 16:52:40.263 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.263994 | orchestrator | 16:52:40.263 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.264027 | orchestrator | 16:52:40.263 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.264053 | orchestrator | 16:52:40.264 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.264060 | orchestrator | 16:52:40.264 STDOUT terraform:  } 2025-06-02 16:52:40.264114 | orchestrator | 16:52:40.264 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-02 16:52:40.264162 | orchestrator | 16:52:40.264 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.264201 | orchestrator | 16:52:40.264 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.264242 | orchestrator | 16:52:40.264 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.264298 | orchestrator | 16:52:40.264 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.264337 | orchestrator | 
16:52:40.264 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.264378 | orchestrator | 16:52:40.264 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-02 16:52:40.264407 | orchestrator | 16:52:40.264 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.264438 | orchestrator | 16:52:40.264 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.264468 | orchestrator | 16:52:40.264 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.264489 | orchestrator | 16:52:40.264 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.264496 | orchestrator | 16:52:40.264 STDOUT terraform:  } 2025-06-02 16:52:40.264550 | orchestrator | 16:52:40.264 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-02 16:52:40.264606 | orchestrator | 16:52:40.264 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.264636 | orchestrator | 16:52:40.264 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.264663 | orchestrator | 16:52:40.264 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.264704 | orchestrator | 16:52:40.264 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.264742 | orchestrator | 16:52:40.264 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.264786 | orchestrator | 16:52:40.264 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-02 16:52:40.264827 | orchestrator | 16:52:40.264 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.264850 | orchestrator | 16:52:40.264 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.264909 | orchestrator | 16:52:40.264 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.264916 | orchestrator | 16:52:40.264 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.264920 | orchestrator | 16:52:40.264 STDOUT terraform:  } 2025-06-02 16:52:40.264959 | orchestrator | 
16:52:40.264 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-02 16:52:40.265009 | orchestrator | 16:52:40.264 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:40.265046 | orchestrator | 16:52:40.264 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:40.265073 | orchestrator | 16:52:40.265 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.265117 | orchestrator | 16:52:40.265 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.265152 | orchestrator | 16:52:40.265 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:40.265201 | orchestrator | 16:52:40.265 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-02 16:52:40.265236 | orchestrator | 16:52:40.265 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.265299 | orchestrator | 16:52:40.265 STDOUT terraform:  + size = 20 2025-06-02 16:52:40.265308 | orchestrator | 16:52:40.265 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:40.265338 | orchestrator | 16:52:40.265 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:40.265345 | orchestrator | 16:52:40.265 STDOUT terraform:  } 2025-06-02 16:52:40.265401 | orchestrator | 16:52:40.265 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-02 16:52:40.265452 | orchestrator | 16:52:40.265 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-02 16:52:40.265485 | orchestrator | 16:52:40.265 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:40.265524 | orchestrator | 16:52:40.265 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:40.265597 | orchestrator | 16:52:40.265 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:40.265604 | orchestrator | 16:52:40.265 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 
16:52:40.265624 | orchestrator | 16:52:40.265 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.265648 | orchestrator | 16:52:40.265 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:40.265688 | orchestrator | 16:52:40.265 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:40.265727 | orchestrator | 16:52:40.265 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:40.265760 | orchestrator | 16:52:40.265 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-02 16:52:40.265787 | orchestrator | 16:52:40.265 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:40.265826 | orchestrator | 16:52:40.265 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 16:52:40.265863 | orchestrator | 16:52:40.265 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.265901 | orchestrator | 16:52:40.265 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 16:52:40.265936 | orchestrator | 16:52:40.265 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 16:52:40.265962 | orchestrator | 16:52:40.265 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 16:52:40.265996 | orchestrator | 16:52:40.265 STDOUT terraform:  + name = "testbed-manager" 2025-06-02 16:52:40.266041 | orchestrator | 16:52:40.265 STDOUT terraform:  + power_state = "active" 2025-06-02 16:52:40.267893 | orchestrator | 16:52:40.267 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.267930 | orchestrator | 16:52:40.267 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 16:52:40.267934 | orchestrator | 16:52:40.267 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 16:52:40.267960 | orchestrator | 16:52:40.267 STDOUT terraform:  + updated = (known after apply) 2025-06-02 16:52:40.267998 | orchestrator | 16:52:40.267 STDOUT terraform:  + user_data = (known after apply) 2025-06-02 16:52:40.268009 | orchestrator | 16:52:40.267 STDOUT terraform:  + block_device 
{ 2025-06-02 16:52:40.268050 | orchestrator | 16:52:40.268 STDOUT terraform:  + boot_index = 0 2025-06-02 16:52:40.268079 | orchestrator | 16:52:40.268 STDOUT terraform:  + delete_on_termination = false 2025-06-02 16:52:40.268120 | orchestrator | 16:52:40.268 STDOUT terraform:  + destination_type = "volume" 2025-06-02 16:52:40.268144 | orchestrator | 16:52:40.268 STDOUT terraform:  + multiattach = false 2025-06-02 16:52:40.268176 | orchestrator | 16:52:40.268 STDOUT terraform:  + source_type = "volume" 2025-06-02 16:52:40.268215 | orchestrator | 16:52:40.268 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:40.268223 | orchestrator | 16:52:40.268 STDOUT terraform:  } 2025-06-02 16:52:40.268242 | orchestrator | 16:52:40.268 STDOUT terraform:  + network { 2025-06-02 16:52:40.268272 | orchestrator | 16:52:40.268 STDOUT terraform:  + access_network = false 2025-06-02 16:52:40.268307 | orchestrator | 16:52:40.268 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 16:52:40.268340 | orchestrator | 16:52:40.268 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 16:52:40.268373 | orchestrator | 16:52:40.268 STDOUT terraform:  + mac = (known after apply) 2025-06-02 16:52:40.268407 | orchestrator | 16:52:40.268 STDOUT terraform:  + name = (known after apply) 2025-06-02 16:52:40.268439 | orchestrator | 16:52:40.268 STDOUT terraform:  + port = (known after apply) 2025-06-02 16:52:40.268472 | orchestrator | 16:52:40.268 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:40.268489 | orchestrator | 16:52:40.268 STDOUT terraform:  } 2025-06-02 16:52:40.268495 | orchestrator | 16:52:40.268 STDOUT terraform:  } 2025-06-02 16:52:40.268549 | orchestrator | 16:52:40.268 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-02 16:52:40.268588 | orchestrator | 16:52:40.268 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 16:52:40.268626 | orchestrator | 
16:52:40.268 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:40.268661 | orchestrator | 16:52:40.268 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:40.268696 | orchestrator | 16:52:40.268 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:40.268763 | orchestrator | 16:52:40.268 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.268789 | orchestrator | 16:52:40.268 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.268812 | orchestrator | 16:52:40.268 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:40.268848 | orchestrator | 16:52:40.268 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:40.268883 | orchestrator | 16:52:40.268 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:40.268915 | orchestrator | 16:52:40.268 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 16:52:40.268938 | orchestrator | 16:52:40.268 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:40.268975 | orchestrator | 16:52:40.268 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 16:52:40.269025 | orchestrator | 16:52:40.268 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.269066 | orchestrator | 16:52:40.269 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 16:52:40.269103 | orchestrator | 16:52:40.269 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 16:52:40.269130 | orchestrator | 16:52:40.269 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 16:52:40.269163 | orchestrator | 16:52:40.269 STDOUT terraform:  + name = "testbed-node-0" 2025-06-02 16:52:40.269188 | orchestrator | 16:52:40.269 STDOUT terraform:  + power_state = "active" 2025-06-02 16:52:40.269234 | orchestrator | 16:52:40.269 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.269293 | orchestrator | 16:52:40.269 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-02 16:52:40.269301 | orchestrator | 16:52:40.269 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 16:52:40.269325 | orchestrator | 16:52:40.269 STDOUT terraform:  + updated = (known after apply) 2025-06-02 16:52:40.269378 | orchestrator | 16:52:40.269 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 16:52:40.269387 | orchestrator | 16:52:40.269 STDOUT terraform:  + block_device { 2025-06-02 16:52:40.269416 | orchestrator | 16:52:40.269 STDOUT terraform:  + boot_index = 0 2025-06-02 16:52:40.269445 | orchestrator | 16:52:40.269 STDOUT terraform:  + delete_on_termination = false 2025-06-02 16:52:40.269475 | orchestrator | 16:52:40.269 STDOUT terraform:  + destination_type = "volume" 2025-06-02 16:52:40.269503 | orchestrator | 16:52:40.269 STDOUT terraform:  + multiattach = false 2025-06-02 16:52:40.269535 | orchestrator | 16:52:40.269 STDOUT terraform:  + source_type = "volume" 2025-06-02 16:52:40.269573 | orchestrator | 16:52:40.269 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:40.269580 | orchestrator | 16:52:40.269 STDOUT terraform:  } 2025-06-02 16:52:40.269601 | orchestrator | 16:52:40.269 STDOUT terraform:  + network { 2025-06-02 16:52:40.269622 | orchestrator | 16:52:40.269 STDOUT terraform:  + access_network = false 2025-06-02 16:52:40.269657 | orchestrator | 16:52:40.269 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 16:52:40.269690 | orchestrator | 16:52:40.269 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 16:52:40.269722 | orchestrator | 16:52:40.269 STDOUT terraform:  + mac = (known after apply) 2025-06-02 16:52:40.269755 | orchestrator | 16:52:40.269 STDOUT terraform:  + name = (known after apply) 2025-06-02 16:52:40.269786 | orchestrator | 16:52:40.269 STDOUT terraform:  + port = (known after apply) 2025-06-02 16:52:40.269819 | orchestrator | 16:52:40.269 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:40.269825 | 
orchestrator | 16:52:40.269 STDOUT terraform:  } 2025-06-02 16:52:40.269845 | orchestrator | 16:52:40.269 STDOUT terraform:  } 2025-06-02 16:52:40.269890 | orchestrator | 16:52:40.269 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-02 16:52:40.269933 | orchestrator | 16:52:40.269 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 16:52:40.269969 | orchestrator | 16:52:40.269 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:40.270006 | orchestrator | 16:52:40.269 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:40.270060 | orchestrator | 16:52:40.269 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:40.270097 | orchestrator | 16:52:40.270 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.270122 | orchestrator | 16:52:40.270 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.270146 | orchestrator | 16:52:40.270 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:40.270185 | orchestrator | 16:52:40.270 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:40.270220 | orchestrator | 16:52:40.270 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:40.270264 | orchestrator | 16:52:40.270 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 16:52:40.270287 | orchestrator | 16:52:40.270 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:40.270321 | orchestrator | 16:52:40.270 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 16:52:40.270359 | orchestrator | 16:52:40.270 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.270395 | orchestrator | 16:52:40.270 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 16:52:40.270432 | orchestrator | 16:52:40.270 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 16:52:40.270446 | orchestrator | 16:52:40.270 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-02 16:52:40.270502 | orchestrator | 16:52:40.270 STDOUT terraform:  + name = "testbed-node-1" 2025-06-02 16:52:40.270527 | orchestrator | 16:52:40.270 STDOUT terraform:  + power_state = "active" 2025-06-02 16:52:40.270563 | orchestrator | 16:52:40.270 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.270598 | orchestrator | 16:52:40.270 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 16:52:40.270622 | orchestrator | 16:52:40.270 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 16:52:40.270660 | orchestrator | 16:52:40.270 STDOUT terraform:  + updated = (known after apply) 2025-06-02 16:52:40.270713 | orchestrator | 16:52:40.270 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 16:52:40.270720 | orchestrator | 16:52:40.270 STDOUT terraform:  + block_device { 2025-06-02 16:52:40.270750 | orchestrator | 16:52:40.270 STDOUT terraform:  + boot_index = 0 2025-06-02 16:52:40.270780 | orchestrator | 16:52:40.270 STDOUT terraform:  + delete_on_termination = false 2025-06-02 16:52:40.270809 | orchestrator | 16:52:40.270 STDOUT terraform:  + destination_type = "volume" 2025-06-02 16:52:40.270839 | orchestrator | 16:52:40.270 STDOUT terraform:  + multiattach = false 2025-06-02 16:52:40.270884 | orchestrator | 16:52:40.270 STDOUT terraform:  + source_type = "volume" 2025-06-02 16:52:40.270910 | orchestrator | 16:52:40.270 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:40.270917 | orchestrator | 16:52:40.270 STDOUT terraform:  } 2025-06-02 16:52:40.270924 | orchestrator | 16:52:40.270 STDOUT terraform:  + network { 2025-06-02 16:52:40.270953 | orchestrator | 16:52:40.270 STDOUT terraform:  + access_network = false 2025-06-02 16:52:40.270986 | orchestrator | 16:52:40.270 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 16:52:40.271016 | orchestrator | 16:52:40.270 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 
16:52:40.271049 | orchestrator | 16:52:40.271 STDOUT terraform:  + mac = (known after apply) 2025-06-02 16:52:40.271083 | orchestrator | 16:52:40.271 STDOUT terraform:  + name = (known after apply) 2025-06-02 16:52:40.271115 | orchestrator | 16:52:40.271 STDOUT terraform:  + port = (known after apply) 2025-06-02 16:52:40.271146 | orchestrator | 16:52:40.271 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:40.271205 | orchestrator | 16:52:40.271 STDOUT terraform:  } 2025-06-02 16:52:40.271210 | orchestrator | 16:52:40.271 STDOUT terraform:  } 2025-06-02 16:52:40.271233 | orchestrator | 16:52:40.271 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-02 16:52:40.271308 | orchestrator | 16:52:40.271 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 16:52:40.271346 | orchestrator | 16:52:40.271 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:40.271383 | orchestrator | 16:52:40.271 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:40.271419 | orchestrator | 16:52:40.271 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:40.271455 | orchestrator | 16:52:40.271 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.271481 | orchestrator | 16:52:40.271 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:40.271501 | orchestrator | 16:52:40.271 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:40.271537 | orchestrator | 16:52:40.271 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:40.271573 | orchestrator | 16:52:40.271 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:40.271605 | orchestrator | 16:52:40.271 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 16:52:40.271629 | orchestrator | 16:52:40.271 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:40.271663 | orchestrator | 16:52:40.271 STDOUT terraform:  + 
2025-06-02 16:52:40 | orchestrator | STDOUT terraform (plan output below; repeated per-line timestamps elided for readability):

      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + name = "testbed-node-3"
        # (all other attributes, block_device, and network identical to node_server[2] above:
        #  availability_zone = "nova", config_drive = true, flavor_name = "OSISM-8V-32",
        #  force_delete = false, key_pair = "testbed", power_state = "active",
        #  stop_before_destroy = false, user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854")
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + name = "testbed-node-4"
        # (attributes identical to node_server[2] above)
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + name = "testbed-node-5"
        # (attributes identical to node_server[2] above)
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [6] will be created
  # (plan bodies identical to node_volume_attachment[0] above)
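The repeated `node_server` and `node_volume_attachment` plan entries are the typical shape of count-indexed resources. A minimal HCL sketch that would produce a plan like this — the resource names visible in the plan are real, but the counts' wiring, the volume resources, and the `user_data` source are assumptions, not taken from the testbed repository's actual Terraform files:

```hcl
# Sketch only: reconstructs the shape of the plan above under stated assumptions.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.sh") # assumed source of the hashed user_data

  # boot_index = 0 with source/destination "volume" means boot-from-volume
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id # assumed volume resource
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id # assumed port wiring
  }
}

# Nine attachments in the plan suggest additional data volumes spread across the nodes;
# the instance/volume mapping here is illustrative, not the repository's real one.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.data_volume[count.index].id # assumed resource
}
```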
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      # (plan body identical to node_volume_attachment[7] above)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      # (scalar attributes identical to manager_port_management above)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      # (scalar attributes identical to manager_port_management above)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-02 16:52:40.297852 | orchestrator | 16:52:40.282 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:40.297856 | orchestrator | 16:52:40.282 STDOUT terraform:  } 2025-06-02 16:52:40.297860 | orchestrator | 16:52:40.282 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.297863 | orchestrator | 16:52:40.282 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:40.297867 | orchestrator | 16:52:40.282 STDOUT terraform:  } 2025-06-02 16:52:40.297871 | orchestrator | 16:52:40.282 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.297874 | orchestrator | 16:52:40.282 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:40.297878 | orchestrator | 16:52:40.282 STDOUT terraform:  } 2025-06-02 16:52:40.297882 | orchestrator | 16:52:40.282 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:40.297886 | orchestrator | 16:52:40.282 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:40.297889 | orchestrator | 16:52:40.282 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-02 16:52:40.297893 | orchestrator | 16:52:40.282 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:40.297897 | orchestrator | 16:52:40.282 STDOUT terraform:  } 2025-06-02 16:52:40.297904 | orchestrator | 16:52:40.282 STDOUT terraform:  } 2025-06-02 16:52:40.297908 | orchestrator | 16:52:40.282 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-02 16:52:40.297912 | orchestrator | 16:52:40.283 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:40.297919 | orchestrator | 16:52:40.283 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:40.297923 | orchestrator | 16:52:40.283 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:40.297926 | orchestrator | 16:52:40.283 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-02 16:52:40.297930 | orchestrator | 16:52:40.283 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.297934 | orchestrator | 16:52:40.283 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:40.297938 | orchestrator | 16:52:40.283 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 16:52:40.297941 | orchestrator | 16:52:40.283 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:40.297945 | orchestrator | 16:52:40.283 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:40.297952 | orchestrator | 16:52:40.283 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.297955 | orchestrator | 16:52:40.283 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:40.297959 | orchestrator | 16:52:40.283 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:40.297963 | orchestrator | 16:52:40.283 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:40.297967 | orchestrator | 16:52:40.283 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:40.297970 | orchestrator | 16:52:40.283 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.297974 | orchestrator | 16:52:40.283 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:40.297978 | orchestrator | 16:52:40.283 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.297981 | orchestrator | 16:52:40.283 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.297985 | orchestrator | 16:52:40.283 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:40.297989 | orchestrator | 16:52:40.283 STDOUT terraform:  } 2025-06-02 16:52:40.297992 | orchestrator | 16:52:40.283 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.297996 | orchestrator | 16:52:40.283 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:40.298000 | 
orchestrator | 16:52:40.283 STDOUT terraform:  } 2025-06-02 16:52:40.298004 | orchestrator | 16:52:40.283 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298007 | orchestrator | 16:52:40.283 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:40.298029 | orchestrator | 16:52:40.283 STDOUT terraform:  } 2025-06-02 16:52:40.298034 | orchestrator | 16:52:40.283 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298038 | orchestrator | 16:52:40.283 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:40.298055 | orchestrator | 16:52:40.283 STDOUT terraform:  } 2025-06-02 16:52:40.298059 | orchestrator | 16:52:40.283 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:40.298062 | orchestrator | 16:52:40.283 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:40.298066 | orchestrator | 16:52:40.283 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-02 16:52:40.298070 | orchestrator | 16:52:40.283 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:40.298074 | orchestrator | 16:52:40.283 STDOUT terraform:  } 2025-06-02 16:52:40.298077 | orchestrator | 16:52:40.283 STDOUT terraform:  } 2025-06-02 16:52:40.298081 | orchestrator | 16:52:40.283 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-02 16:52:40.298085 | orchestrator | 16:52:40.283 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:40.298089 | orchestrator | 16:52:40.283 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:40.298092 | orchestrator | 16:52:40.283 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:40.298100 | orchestrator | 16:52:40.284 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 16:52:40.298104 | orchestrator | 16:52:40.284 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.298107 | orchestrator | 
16:52:40.284 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:40.298111 | orchestrator | 16:52:40.284 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 16:52:40.298115 | orchestrator | 16:52:40.284 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:40.298118 | orchestrator | 16:52:40.284 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:40.298122 | orchestrator | 16:52:40.284 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298126 | orchestrator | 16:52:40.284 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:40.298129 | orchestrator | 16:52:40.284 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:40.298133 | orchestrator | 16:52:40.284 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:40.298137 | orchestrator | 16:52:40.284 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:40.298141 | orchestrator | 16:52:40.284 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298144 | orchestrator | 16:52:40.284 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:40.298148 | orchestrator | 16:52:40.284 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298152 | orchestrator | 16:52:40.284 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298155 | orchestrator | 16:52:40.284 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:40.298159 | orchestrator | 16:52:40.284 STDOUT terraform:  } 2025-06-02 16:52:40.298163 | orchestrator | 16:52:40.284 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298166 | orchestrator | 16:52:40.284 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:40.298174 | orchestrator | 16:52:40.284 STDOUT terraform:  } 2025-06-02 16:52:40.298178 | orchestrator | 16:52:40.284 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 
16:52:40.298181 | orchestrator | 16:52:40.284 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:40.298185 | orchestrator | 16:52:40.284 STDOUT terraform:  } 2025-06-02 16:52:40.298189 | orchestrator | 16:52:40.284 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298192 | orchestrator | 16:52:40.284 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:40.298196 | orchestrator | 16:52:40.284 STDOUT terraform:  } 2025-06-02 16:52:40.298200 | orchestrator | 16:52:40.284 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:40.298204 | orchestrator | 16:52:40.284 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:40.298207 | orchestrator | 16:52:40.284 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-02 16:52:40.298211 | orchestrator | 16:52:40.284 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:40.298215 | orchestrator | 16:52:40.284 STDOUT terraform:  } 2025-06-02 16:52:40.298218 | orchestrator | 16:52:40.284 STDOUT terraform:  } 2025-06-02 16:52:40.298222 | orchestrator | 16:52:40.284 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-02 16:52:40.298226 | orchestrator | 16:52:40.284 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:40.298230 | orchestrator | 16:52:40.284 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:40.298233 | orchestrator | 16:52:40.284 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:40.298237 | orchestrator | 16:52:40.284 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 16:52:40.298244 | orchestrator | 16:52:40.284 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.298247 | orchestrator | 16:52:40.284 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:40.298267 | orchestrator | 16:52:40.284 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-02 16:52:40.298271 | orchestrator | 16:52:40.284 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:40.298275 | orchestrator | 16:52:40.285 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:40.298279 | orchestrator | 16:52:40.285 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298282 | orchestrator | 16:52:40.285 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:40.298286 | orchestrator | 16:52:40.285 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:40.298290 | orchestrator | 16:52:40.285 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:40.298293 | orchestrator | 16:52:40.285 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:40.298297 | orchestrator | 16:52:40.285 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298304 | orchestrator | 16:52:40.285 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:40.298312 | orchestrator | 16:52:40.285 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298316 | orchestrator | 16:52:40.285 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298319 | orchestrator | 16:52:40.285 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:40.298323 | orchestrator | 16:52:40.285 STDOUT terraform:  } 2025-06-02 16:52:40.298327 | orchestrator | 16:52:40.285 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298330 | orchestrator | 16:52:40.285 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:40.298334 | orchestrator | 16:52:40.285 STDOUT terraform:  } 2025-06-02 16:52:40.298338 | orchestrator | 16:52:40.285 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298342 | orchestrator | 16:52:40.285 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:40.298345 | orchestrator | 16:52:40.285 STDOUT terraform:  } 
2025-06-02 16:52:40.298349 | orchestrator | 16:52:40.285 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298353 | orchestrator | 16:52:40.285 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:40.298356 | orchestrator | 16:52:40.285 STDOUT terraform:  } 2025-06-02 16:52:40.298360 | orchestrator | 16:52:40.285 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:40.298364 | orchestrator | 16:52:40.285 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:40.298368 | orchestrator | 16:52:40.285 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-02 16:52:40.298371 | orchestrator | 16:52:40.285 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:40.298375 | orchestrator | 16:52:40.285 STDOUT terraform:  } 2025-06-02 16:52:40.298379 | orchestrator | 16:52:40.285 STDOUT terraform:  } 2025-06-02 16:52:40.298382 | orchestrator | 16:52:40.285 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-02 16:52:40.298386 | orchestrator | 16:52:40.285 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:40.298390 | orchestrator | 16:52:40.285 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:40.298394 | orchestrator | 16:52:40.285 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:40.298397 | orchestrator | 16:52:40.285 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 16:52:40.298401 | orchestrator | 16:52:40.285 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.298405 | orchestrator | 16:52:40.285 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:40.298408 | orchestrator | 16:52:40.285 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 16:52:40.298412 | orchestrator | 16:52:40.285 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:40.298416 | orchestrator | 
16:52:40.285 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:40.298423 | orchestrator | 16:52:40.285 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298427 | orchestrator | 16:52:40.285 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:40.298433 | orchestrator | 16:52:40.286 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:40.298437 | orchestrator | 16:52:40.286 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:40.298441 | orchestrator | 16:52:40.286 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:40.298445 | orchestrator | 16:52:40.286 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298448 | orchestrator | 16:52:40.286 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:40.298452 | orchestrator | 16:52:40.286 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298458 | orchestrator | 16:52:40.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298462 | orchestrator | 16:52:40.286 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:40.298466 | orchestrator | 16:52:40.286 STDOUT terraform:  } 2025-06-02 16:52:40.298469 | orchestrator | 16:52:40.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298473 | orchestrator | 16:52:40.286 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:40.298477 | orchestrator | 16:52:40.286 STDOUT terraform:  } 2025-06-02 16:52:40.298481 | orchestrator | 16:52:40.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298484 | orchestrator | 16:52:40.286 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:40.298488 | orchestrator | 16:52:40.286 STDOUT terraform:  } 2025-06-02 16:52:40.298492 | orchestrator | 16:52:40.286 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:40.298496 | orchestrator | 16:52:40.286 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:40.298499 | orchestrator | 16:52:40.286 STDOUT terraform:  } 2025-06-02 16:52:40.298503 | orchestrator | 16:52:40.286 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:40.298507 | orchestrator | 16:52:40.286 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:40.298510 | orchestrator | 16:52:40.286 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-02 16:52:40.298514 | orchestrator | 16:52:40.286 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:40.298518 | orchestrator | 16:52:40.286 STDOUT terraform:  } 2025-06-02 16:52:40.298522 | orchestrator | 16:52:40.286 STDOUT terraform:  } 2025-06-02 16:52:40.298526 | orchestrator | 16:52:40.286 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-02 16:52:40.298529 | orchestrator | 16:52:40.287 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-02 16:52:40.298533 | orchestrator | 16:52:40.287 STDOUT terraform:  + force_destroy = false 2025-06-02 16:52:40.298537 | orchestrator | 16:52:40.287 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298541 | orchestrator | 16:52:40.287 STDOUT terraform:  + port_id = (known after apply) 2025-06-02 16:52:40.298544 | orchestrator | 16:52:40.287 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298548 | orchestrator | 16:52:40.287 STDOUT terraform:  + router_id = (known after apply) 2025-06-02 16:52:40.298555 | orchestrator | 16:52:40.287 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:40.298558 | orchestrator | 16:52:40.287 STDOUT terraform:  } 2025-06-02 16:52:40.298562 | orchestrator | 16:52:40.287 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-02 16:52:40.298566 | orchestrator | 16:52:40.287 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-02 16:52:40.298570 
| orchestrator | 16:52:40.288 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:40.298573 | orchestrator | 16:52:40.288 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:40.298580 | orchestrator | 16:52:40.288 STDOUT terraform:  + availability_zone_hints = [ 2025-06-02 16:52:40.298584 | orchestrator | 16:52:40.288 STDOUT terraform:  + "nova", 2025-06-02 16:52:40.298587 | orchestrator | 16:52:40.288 STDOUT terraform:  ] 2025-06-02 16:52:40.298595 | orchestrator | 16:52:40.288 STDOUT terraform:  + distributed = (known after apply) 2025-06-02 16:52:40.298599 | orchestrator | 16:52:40.288 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-02 16:52:40.298603 | orchestrator | 16:52:40.288 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-02 16:52:40.298606 | orchestrator | 16:52:40.288 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298610 | orchestrator | 16:52:40.288 STDOUT terraform:  + name = "testbed" 2025-06-02 16:52:40.298614 | orchestrator | 16:52:40.288 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298617 | orchestrator | 16:52:40.288 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298621 | orchestrator | 16:52:40.288 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-02 16:52:40.298625 | orchestrator | 16:52:40.288 STDOUT terraform:  } 2025-06-02 16:52:40.298629 | orchestrator | 16:52:40.288 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-02 16:52:40.298633 | orchestrator | 16:52:40.288 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-02 16:52:40.298637 | orchestrator | 16:52:40.288 STDOUT terraform:  + description = "ssh" 2025-06-02 16:52:40.298640 | orchestrator | 16:52:40.288 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:40.298644 | 
orchestrator | 16:52:40.288 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:40.298648 | orchestrator | 16:52:40.288 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298652 | orchestrator | 16:52:40.288 STDOUT terraform:  + port_range_max = 22 2025-06-02 16:52:40.298655 | orchestrator | 16:52:40.288 STDOUT terraform:  + port_range_min = 22 2025-06-02 16:52:40.298659 | orchestrator | 16:52:40.288 STDOUT terraform:  + protocol = "tcp" 2025-06-02 16:52:40.298663 | orchestrator | 16:52:40.288 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298666 | orchestrator | 16:52:40.288 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:40.298670 | orchestrator | 16:52:40.288 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:40.298677 | orchestrator | 16:52:40.288 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:40.298680 | orchestrator | 16:52:40.288 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298684 | orchestrator | 16:52:40.288 STDOUT terraform:  } 2025-06-02 16:52:40.298688 | orchestrator | 16:52:40.288 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-02 16:52:40.298692 | orchestrator | 16:52:40.288 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-02 16:52:40.298695 | orchestrator | 16:52:40.288 STDOUT terraform:  + description = "wireguard" 2025-06-02 16:52:40.298699 | orchestrator | 16:52:40.288 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:40.298703 | orchestrator | 16:52:40.288 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:40.298707 | orchestrator | 16:52:40.288 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298710 | orchestrator | 16:52:40.288 STDOUT terraform:  + port_range_max = 51820 2025-06-02 16:52:40.298714 | orchestrator | 16:52:40.288 STDOUT 
terraform:  + port_range_min = 51820 2025-06-02 16:52:40.298718 | orchestrator | 16:52:40.288 STDOUT terraform:  + protocol = "udp" 2025-06-02 16:52:40.298721 | orchestrator | 16:52:40.288 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298725 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:40.298732 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:40.298736 | orchestrator | 16:52:40.289 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:40.298739 | orchestrator | 16:52:40.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298746 | orchestrator | 16:52:40.289 STDOUT terraform:  } 2025-06-02 16:52:40.298750 | orchestrator | 16:52:40.289 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-02 16:52:40.298754 | orchestrator | 16:52:40.289 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-02 16:52:40.298757 | orchestrator | 16:52:40.289 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:40.298761 | orchestrator | 16:52:40.289 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:40.298769 | orchestrator | 16:52:40.289 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298773 | orchestrator | 16:52:40.289 STDOUT terraform:  + protocol = "tcp" 2025-06-02 16:52:40.298777 | orchestrator | 16:52:40.289 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298780 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:40.298784 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 16:52:40.298788 | orchestrator | 16:52:40.289 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:40.298791 | orchestrator | 
16:52:40.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298798 | orchestrator | 16:52:40.289 STDOUT terraform:  } 2025-06-02 16:52:40.298802 | orchestrator | 16:52:40.289 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-02 16:52:40.298806 | orchestrator | 16:52:40.289 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-02 16:52:40.298810 | orchestrator | 16:52:40.289 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:40.298813 | orchestrator | 16:52:40.289 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:40.298817 | orchestrator | 16:52:40.289 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298821 | orchestrator | 16:52:40.289 STDOUT terraform:  + protocol = "udp" 2025-06-02 16:52:40.298825 | orchestrator | 16:52:40.289 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298828 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:40.298832 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 16:52:40.298836 | orchestrator | 16:52:40.289 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:40.298839 | orchestrator | 16:52:40.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298843 | orchestrator | 16:52:40.289 STDOUT terraform:  } 2025-06-02 16:52:40.298847 | orchestrator | 16:52:40.289 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-02 16:52:40.298851 | orchestrator | 16:52:40.289 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-02 16:52:40.298854 | orchestrator | 16:52:40.289 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:40.298858 | orchestrator | 16:52:40.289 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:40.298862 | orchestrator | 16:52:40.289 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298865 | orchestrator | 16:52:40.289 STDOUT terraform:  + protocol = "icmp" 2025-06-02 16:52:40.298869 | orchestrator | 16:52:40.289 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298873 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:40.298879 | orchestrator | 16:52:40.289 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:40.298883 | orchestrator | 16:52:40.289 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:40.298887 | orchestrator | 16:52:40.290 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298890 | orchestrator | 16:52:40.290 STDOUT terraform:  } 2025-06-02 16:52:40.298894 | orchestrator | 16:52:40.290 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-02 16:52:40.298898 | orchestrator | 16:52:40.290 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-02 16:52:40.298902 | orchestrator | 16:52:40.290 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:40.298906 | orchestrator | 16:52:40.290 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:40.298913 | orchestrator | 16:52:40.290 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298919 | orchestrator | 16:52:40.290 STDOUT terraform:  + protocol = "tcp" 2025-06-02 16:52:40.298923 | orchestrator | 16:52:40.290 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298927 | orchestrator | 16:52:40.290 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:40.298931 | orchestrator | 16:52:40.290 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:40.298934 | orchestrator | 16:52:40.290 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-06-02 16:52:40.298938 | orchestrator | 16:52:40.290 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298942 | orchestrator | 16:52:40.290 STDOUT terraform:  } 2025-06-02 16:52:40.298945 | orchestrator | 16:52:40.290 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-02 16:52:40.298949 | orchestrator | 16:52:40.290 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-02 16:52:40.298953 | orchestrator | 16:52:40.290 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:40.298957 | orchestrator | 16:52:40.290 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:40.298960 | orchestrator | 16:52:40.290 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:40.298964 | orchestrator | 16:52:40.290 STDOUT terraform:  + protocol = "udp" 2025-06-02 16:52:40.298968 | orchestrator | 16:52:40.290 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:40.298972 | orchestrator | 16:52:40.290 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:40.298975 | orchestrator | 16:52:40.290 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:40.298979 | orchestrator | 16:52:40.290 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:40.298983 | orchestrator | 16:52:40.290 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:40.298986 | orchestrator | 16:52:40.290 STDOUT terraform:  } 2025-06-02 16:52:40.298990 | orchestrator | 16:52:40.290 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-02 16:52:40.298994 | orchestrator | 16:52:40.290 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-02 16:52:40.298998 | orchestrator | 16:52:40.290 STDOUT terraform:  + direction = "ingress" 
2025-06-02 16:52:40.299001 | orchestrator | 16:52:40.290 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 16:52:40.299005 | orchestrator | 16:52:40.290 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.299009 | orchestrator | 16:52:40.290 STDOUT terraform:  + protocol = "icmp"
2025-06-02 16:52:40.299012 | orchestrator | 16:52:40.290 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.299016 | orchestrator | 16:52:40.290 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 16:52:40.299020 | orchestrator | 16:52:40.290 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 16:52:40.299026 | orchestrator | 16:52:40.290 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 16:52:40.299033 | orchestrator | 16:52:40.290 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 16:52:40.299037 | orchestrator | 16:52:40.291 STDOUT terraform:  }
2025-06-02 16:52:40.299041 | orchestrator | 16:52:40.291 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-02 16:52:40.299044 | orchestrator | 16:52:40.291 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-02 16:52:40.299048 | orchestrator | 16:52:40.291 STDOUT terraform:  + description = "vrrp"
2025-06-02 16:52:40.299052 | orchestrator | 16:52:40.291 STDOUT terraform:  + direction = "ingress"
2025-06-02 16:52:40.299056 | orchestrator | 16:52:40.291 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 16:52:40.299059 | orchestrator | 16:52:40.291 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.299066 | orchestrator | 16:52:40.291 STDOUT terraform:  + protocol = "112"
2025-06-02 16:52:40.299070 | orchestrator | 16:52:40.291 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.299074 | orchestrator | 16:52:40.291 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 16:52:40.299077 | orchestrator | 16:52:40.291 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 16:52:40.299081 | orchestrator | 16:52:40.291 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 16:52:40.299085 | orchestrator | 16:52:40.291 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 16:52:40.299088 | orchestrator | 16:52:40.291 STDOUT terraform:  }
2025-06-02 16:52:40.299092 | orchestrator | 16:52:40.291 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-02 16:52:40.299096 | orchestrator | 16:52:40.291 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-02 16:52:40.299100 | orchestrator | 16:52:40.291 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 16:52:40.299104 | orchestrator | 16:52:40.291 STDOUT terraform:  + description = "management security group"
2025-06-02 16:52:40.299107 | orchestrator | 16:52:40.291 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.299111 | orchestrator | 16:52:40.291 STDOUT terraform:  + name = "testbed-management"
2025-06-02 16:52:40.299115 | orchestrator | 16:52:40.291 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.299119 | orchestrator | 16:52:40.291 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 16:52:40.299122 | orchestrator | 16:52:40.291 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 16:52:40.299126 | orchestrator | 16:52:40.291 STDOUT terraform:  }
2025-06-02 16:52:40.299130 | orchestrator | 16:52:40.291 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-02 16:52:40.299134 | orchestrator | 16:52:40.291 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-02 16:52:40.299137 | orchestrator | 16:52:40.291 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 16:52:40.299141 | orchestrator | 16:52:40.291 STDOUT terraform:  + description = "node security group"
2025-06-02 16:52:40.299149 | orchestrator | 16:52:40.291 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.299152 | orchestrator | 16:52:40.291 STDOUT terraform:  + name = "testbed-node"
2025-06-02 16:52:40.299156 | orchestrator | 16:52:40.291 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.299160 | orchestrator | 16:52:40.291 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 16:52:40.299163 | orchestrator | 16:52:40.292 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 16:52:40.299167 | orchestrator | 16:52:40.292 STDOUT terraform:  }
2025-06-02 16:52:40.299171 | orchestrator | 16:52:40.292 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-02 16:52:40.299175 | orchestrator | 16:52:40.292 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-02 16:52:40.299183 | orchestrator | 16:52:40.292 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 16:52:40.299186 | orchestrator | 16:52:40.292 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-02 16:52:40.299190 | orchestrator | 16:52:40.292 STDOUT terraform:  + dns_nameservers = [
2025-06-02 16:52:40.299194 | orchestrator | 16:52:40.292 STDOUT terraform:  + "8.8.8.8",
2025-06-02 16:52:40.299198 | orchestrator | 16:52:40.292 STDOUT terraform:  + "9.9.9.9",
2025-06-02 16:52:40.299201 | orchestrator | 16:52:40.292 STDOUT terraform:  ]
2025-06-02 16:52:40.299205 | orchestrator | 16:52:40.292 STDOUT terraform:  + enable_dhcp = true
2025-06-02 16:52:40.299209 | orchestrator | 16:52:40.292 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-02 16:52:40.299212 | orchestrator | 16:52:40.292 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.299216 | orchestrator | 16:52:40.292 STDOUT terraform:  + ip_version = 4
2025-06-02 16:52:40.299220 | orchestrator | 16:52:40.292 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-02 16:52:40.299224 | orchestrator | 16:52:40.292 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-02 16:52:40.299227 | orchestrator | 16:52:40.292 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-02 16:52:40.299231 | orchestrator | 16:52:40.292 STDOUT terraform:  + network_id = (known after apply)
2025-06-02 16:52:40.299235 | orchestrator | 16:52:40.292 STDOUT terraform:  + no_gateway = false
2025-06-02 16:52:40.299238 | orchestrator | 16:52:40.292 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:40.299242 | orchestrator | 16:52:40.292 STDOUT terraform:  + service_types = (known after apply)
2025-06-02 16:52:40.299246 | orchestrator | 16:52:40.292 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 16:52:40.299278 | orchestrator | 16:52:40.292 STDOUT terraform:  + allocation_pool {
2025-06-02 16:52:40.299282 | orchestrator | 16:52:40.292 STDOUT terraform:  + end = "192.168.31.250"
2025-06-02 16:52:40.299286 | orchestrator | 16:52:40.292 STDOUT terraform:  + start = "192.168.31.200"
2025-06-02 16:52:40.299290 | orchestrator | 16:52:40.292 STDOUT terraform:  }
2025-06-02 16:52:40.299294 | orchestrator | 16:52:40.292 STDOUT terraform:  }
2025-06-02 16:52:40.299302 | orchestrator | 16:52:40.292 STDOUT terraform:  # terraform_data.image will be created
2025-06-02 16:52:40.299333 | orchestrator | 16:52:40.292 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-02 16:52:40.299338 | orchestrator | 16:52:40.292 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.299342 | orchestrator | 16:52:40.292 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 16:52:40.299345 | orchestrator | 16:52:40.292 STDOUT terraform:  + output = (known after apply)
2025-06-02 16:52:40.299349 | orchestrator | 16:52:40.292 STDOUT terraform:  }
2025-06-02 16:52:40.299353 | orchestrator | 16:52:40.292 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-02 16:52:40.299357 | orchestrator | 16:52:40.292 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-02 16:52:40.299360 | orchestrator | 16:52:40.292 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:40.299364 | orchestrator | 16:52:40.292 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 16:52:40.299368 | orchestrator | 16:52:40.292 STDOUT terraform:  + output = (known after apply)
2025-06-02 16:52:40.299371 | orchestrator | 16:52:40.292 STDOUT terraform:  }
2025-06-02 16:52:40.299375 | orchestrator | 16:52:40.292 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-02 16:52:40.299379 | orchestrator | 16:52:40.292 STDOUT terraform: Changes to Outputs:
2025-06-02 16:52:40.299383 | orchestrator | 16:52:40.292 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-02 16:52:40.299386 | orchestrator | 16:52:40.292 STDOUT terraform:  + private_key = (sensitive value)
2025-06-02 16:52:40.495224 | orchestrator | 16:52:40.494 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-02 16:52:40.495355 | orchestrator | 16:52:40.495 STDOUT terraform: terraform_data.image: Creating...
2025-06-02 16:52:40.495385 | orchestrator | 16:52:40.495 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=ebd6f7d1-feee-3b4c-086c-018b9c548bb0]
2025-06-02 16:52:40.495400 | orchestrator | 16:52:40.495 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=8219eae2-9e75-0bc0-7aa8-f5b74d4aedb8]
2025-06-02 16:52:40.513775 | orchestrator | 16:52:40.513 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-02 16:52:40.514330 | orchestrator | 16:52:40.514 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-02 16:52:40.523921 | orchestrator | 16:52:40.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
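The plan above contains enough attribute detail to reconstruct the shape of the HCL behind it. The following is a hypothetical sketch (the actual configuration lives in the osism/testbed repository; attribute values are copied from the plan output, and the `security_group_id` reference is an assumption). Note that VRRP has no named protocol keyword, so the rule uses the raw IP protocol number 112:

```hcl
# Sketch reconstructed from the plan output above -- not the canonical
# osism/testbed source. The security_group_id reference is assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP: IP protocol number, no named keyword exists
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out only this slice; the rest of the /20 is free for static IPs.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```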
2025-06-02 16:52:40.523970 | orchestrator | 16:52:40.523 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-02 16:52:40.523975 | orchestrator | 16:52:40.523 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-02 16:52:40.524349 | orchestrator | 16:52:40.524 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-02 16:52:40.525110 | orchestrator | 16:52:40.524 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-02 16:52:40.528386 | orchestrator | 16:52:40.528 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-02 16:52:40.529707 | orchestrator | 16:52:40.529 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-02 16:52:40.531784 | orchestrator | 16:52:40.531 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-06-02 16:52:40.943095 | orchestrator | 16:52:40.942 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 16:52:40.950368 | orchestrator | 16:52:40.950 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-02 16:52:40.957205 | orchestrator | 16:52:40.957 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 16:52:40.961194 | orchestrator | 16:52:40.960 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-02 16:52:40.996054 | orchestrator | 16:52:40.995 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-06-02 16:52:41.004789 | orchestrator | 16:52:41.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-02 16:52:46.522917 | orchestrator | 16:52:46.522 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=01a342b8-9eff-4233-b317-eef6ef8742f4]
2025-06-02 16:52:46.537616 | orchestrator | 16:52:46.536 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-02 16:52:50.525303 | orchestrator | 16:52:50.524 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-02 16:52:50.525413 | orchestrator | 16:52:50.525 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-02 16:52:50.525441 | orchestrator | 16:52:50.525 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-02 16:52:50.525454 | orchestrator | 16:52:50.525 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-02 16:52:50.529658 | orchestrator | 16:52:50.529 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-02 16:52:50.533123 | orchestrator | 16:52:50.533 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-02 16:52:50.951303 | orchestrator | 16:52:50.950 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-02 16:52:50.962547 | orchestrator | 16:52:50.962 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-02 16:52:51.005043 | orchestrator | 16:52:51.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-02 16:52:51.090090 | orchestrator | 16:52:51.089 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=37a5ef51-3790-4474-9294-da6668d88e33]
2025-06-02 16:52:51.100556 | orchestrator | 16:52:51.100 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=6f5db02e-386c-41b9-ae07-b7cce6e0964a]
2025-06-02 16:52:51.100603 | orchestrator | 16:52:51.100 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-02 16:52:51.102359 | orchestrator | 16:52:51.102 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=abb01d95-8fd4-488e-8b6c-7cb2a7271361]
2025-06-02 16:52:51.106468 | orchestrator | 16:52:51.106 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-02 16:52:51.112064 | orchestrator | 16:52:51.111 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-02 16:52:51.114822 | orchestrator | 16:52:51.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=cc6b7f8a-a299-449d-8912-3815da19ff1f]
2025-06-02 16:52:51.120976 | orchestrator | 16:52:51.120 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-02 16:52:51.124524 | orchestrator | 16:52:51.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=5d913f80-ed99-4f7f-af77-a272e71d6767]
2025-06-02 16:52:51.129177 | orchestrator | 16:52:51.129 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-02 16:52:51.140249 | orchestrator | 16:52:51.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=8b34934e-11eb-4c36-8207-511a42fe0f38]
2025-06-02 16:52:51.146485 | orchestrator | 16:52:51.146 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-02 16:52:51.191654 | orchestrator | 16:52:51.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=d22e3547-dc50-4b67-b48e-5886da7d5148]
2025-06-02 16:52:51.194138 | orchestrator | 16:52:51.193 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=f15aa92f-a864-46a7-a446-d151182076d1]
2025-06-02 16:52:51.206047 | orchestrator | 16:52:51.205 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=fb369b5e-a271-4fa4-9f85-1311171daecb]
2025-06-02 16:52:51.212408 | orchestrator | 16:52:51.212 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-02 16:52:51.212529 | orchestrator | 16:52:51.212 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-02 16:52:51.218098 | orchestrator | 16:52:51.217 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-02 16:52:51.221533 | orchestrator | 16:52:51.221 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=cf929014b4a3c61264e9d7250d4288764a2f4158]
2025-06-02 16:52:51.222613 | orchestrator | 16:52:51.222 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=810af49c3f9364f0bc2c5c940a2631ba99609971]
2025-06-02 16:52:56.542295 | orchestrator | 16:52:56.540 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-02 16:52:56.839738 | orchestrator | 16:52:56.838 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=7ce206cb-87d4-44fa-8b19-ffddc5f2b300]
2025-06-02 16:52:56.941638 | orchestrator | 16:52:56.940 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=63aefea3-fb1b-4daf-bc18-7e8147abe12e]
2025-06-02 16:52:56.949080 | orchestrator | 16:52:56.948 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-02 16:53:01.102427 | orchestrator | 16:53:01.101 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-02 16:53:01.107508 | orchestrator | 16:53:01.107 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 16:53:01.112983 | orchestrator | 16:53:01.112 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-02 16:53:01.121180 | orchestrator | 16:53:01.121 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-02 16:53:01.130441 | orchestrator | 16:53:01.130 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-02 16:53:01.148054 | orchestrator | 16:53:01.147 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 16:53:01.442359 | orchestrator | 16:53:01.438 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=9ed89d00-d2b1-4316-9e61-ba744145484e]
2025-06-02 16:53:01.459628 | orchestrator | 16:53:01.459 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=33e5408c-eac4-45cf-8284-ea43471071f8]
2025-06-02 16:53:01.482617 | orchestrator | 16:53:01.482 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=dc2aa846-a38d-43a1-9fee-c1088582d602]
2025-06-02 16:53:01.500552 | orchestrator | 16:53:01.499 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=719d5228-8def-49aa-934d-4d9ae9a2b478]
2025-06-02 16:53:01.511872 | orchestrator | 16:53:01.511 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=d916e83a-af5b-4ece-a73c-3cfc7c74b767]
2025-06-02 16:53:01.512634 | orchestrator | 16:53:01.512 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=833053d5-10c0-4334-a6d6-8a8f09775a72]
2025-06-02 16:53:04.451322 | orchestrator | 16:53:04.450 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=762ef103-604e-47b1-b998-e46d2b9e3cca]
2025-06-02 16:53:04.457568 | orchestrator | 16:53:04.457 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-02 16:53:04.458733 | orchestrator | 16:53:04.458 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-02 16:53:04.459419 | orchestrator | 16:53:04.459 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-06-02 16:53:04.660037 | orchestrator | 16:53:04.659 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=29aad04d-d846-4af8-885f-fc57a7f16f84]
2025-06-02 16:53:04.679887 | orchestrator | 16:53:04.679 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-02 16:53:04.683863 | orchestrator | 16:53:04.683 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-02 16:53:04.688340 | orchestrator | 16:53:04.688 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-02 16:53:04.688751 | orchestrator | 16:53:04.688 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-02 16:53:04.689574 | orchestrator | 16:53:04.689 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-02 16:53:04.694923 | orchestrator | 16:53:04.694 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-02 16:53:04.695056 | orchestrator | 16:53:04.694 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-02 16:53:04.695725 | orchestrator | 16:53:04.695 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-02 16:53:04.863128 | orchestrator | 16:53:04.862 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=87a7deb9-88cd-4149-a312-aae45bd98850]
2025-06-02 16:53:04.879499 | orchestrator | 16:53:04.879 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-02 16:53:05.018659 | orchestrator | 16:53:05.018 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=c4009dfb-c433-465f-98dd-6d31db83cdc3]
2025-06-02 16:53:05.034111 | orchestrator | 16:53:05.033 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-02 16:53:05.151544 | orchestrator | 16:53:05.151 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=a636c497-03c0-41c4-bbb8-b15773c108ce]
2025-06-02 16:53:05.161418 | orchestrator | 16:53:05.161 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=3394059c-93f2-402a-ba73-79e4cc0e0c8a]
2025-06-02 16:53:05.162745 | orchestrator | 16:53:05.162 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-02 16:53:05.178188 | orchestrator | 16:53:05.178 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-02 16:53:05.385095 | orchestrator | 16:53:05.384 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=22198ed5-085f-402b-afb5-c7402f2f6098]
2025-06-02 16:53:05.392843 | orchestrator | 16:53:05.392 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-02 16:53:05.529022 | orchestrator | 16:53:05.528 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=22a714b3-edab-48da-9aa5-33b94c0c3c88]
2025-06-02 16:53:05.538893 | orchestrator | 16:53:05.538 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-02 16:53:05.583008 | orchestrator | 16:53:05.582 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f45d3e24-28bb-489a-8a56-8822ffc7f846]
2025-06-02 16:53:05.590204 | orchestrator | 16:53:05.590 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-02 16:53:05.784350 | orchestrator | 16:53:05.783 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=d7cc686c-b4e9-4965-afd2-722bc9f55c4b]
2025-06-02 16:53:05.795590 | orchestrator | 16:53:05.795 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-02 16:53:06.137093 | orchestrator | 16:53:06.136 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=e7181d42-8a82-4e78-8ad5-1d987be6e871]
2025-06-02 16:53:06.460030 | orchestrator | 16:53:06.459 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=83e72cf2-177d-4b61-abf3-8bc921c7bc1b]
2025-06-02 16:53:10.331522 | orchestrator | 16:53:10.331 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=b17f1179-1fa7-4391-bdc4-2ff0411d2b7a]
2025-06-02 16:53:10.333139 | orchestrator | 16:53:10.332 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=cf0868cb-03de-4e30-9fcd-a0cfa2de26ac]
2025-06-02 16:53:10.439067 | orchestrator | 16:53:10.438 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=92c0feff-a6eb-469d-a769-a05ce97eb8c7]
2025-06-02 16:53:10.558430 | orchestrator | 16:53:10.558 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=223cbdff-3c00-4dd7-81ac-7f803c980909]
2025-06-02 16:53:10.802185 | orchestrator | 16:53:10.801 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=78d3d585-6d0c-41f4-a168-75cc6ca7ca6e]
2025-06-02 16:53:10.828100 | orchestrator | 16:53:10.827 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=2542e671-1e9b-4381-ba2e-7866f772209b]
2025-06-02 16:53:10.840130 | orchestrator | 16:53:10.839 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=15797291-dc64-4afc-8ff4-5a6296a82aff]
2025-06-02 16:53:12.153885 | orchestrator | 16:53:12.153 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=03fcb717-5f8b-4236-96c3-8c87accec606]
2025-06-02 16:53:12.181240 | orchestrator | 16:53:12.179 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-02 16:53:12.191576 | orchestrator | 16:53:12.191 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-02 16:53:12.192607 | orchestrator | 16:53:12.192 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-02 16:53:12.211102 | orchestrator | 16:53:12.210 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-02 16:53:12.212962 | orchestrator | 16:53:12.212 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-02 16:53:12.222828 | orchestrator | 16:53:12.222 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-02 16:53:12.225085 | orchestrator | 16:53:12.224 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-02 16:53:19.328186 | orchestrator | 16:53:19.326 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=655ed976-82d4-4e62-905e-892b7f6e7a48]
2025-06-02 16:53:19.334530 | orchestrator | 16:53:19.334 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-02 16:53:19.342486 | orchestrator | 16:53:19.342 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-02 16:53:19.343479 | orchestrator | 16:53:19.343 STDOUT terraform: local_file.inventory: Creating...
2025-06-02 16:53:19.346757 | orchestrator | 16:53:19.346 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=c9228e1a0d11b70710541785e35a0f4f5ce10c4b]
2025-06-02 16:53:19.348682 | orchestrator | 16:53:19.348 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=b4610271ab745dd3a0cb9639b3dce398eab41466]
2025-06-02 16:53:20.275467 | orchestrator | 16:53:20.274 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=655ed976-82d4-4e62-905e-892b7f6e7a48]
2025-06-02 16:53:22.193134 | orchestrator | 16:53:22.192 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-02 16:53:22.194263 | orchestrator | 16:53:22.194 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-02 16:53:22.212382 | orchestrator | 16:53:22.212 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-02 16:53:22.214630 | orchestrator | 16:53:22.214 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-02 16:53:22.223875 | orchestrator | 16:53:22.223 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-02 16:53:22.226239 | orchestrator | 16:53:22.226 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-02 16:53:32.193251 | orchestrator | 16:53:32.192 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-02 16:53:32.195266 | orchestrator | 16:53:32.195 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-02 16:53:32.213696 | orchestrator | 16:53:32.213 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-02 16:53:32.215810 | orchestrator | 16:53:32.215 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-02 16:53:32.224983 | orchestrator | 16:53:32.224 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-02 16:53:32.227356 | orchestrator | 16:53:32.227 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-02 16:53:32.836409 | orchestrator | 16:53:32.835 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=04c6c6f1-9af5-4b66-ae90-d0f9eb1891e3]
2025-06-02 16:53:32.926988 | orchestrator | 16:53:32.926 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=8920940a-8caf-4088-8329-52fabc6a16ee]
2025-06-02 16:53:33.251497 | orchestrator | 16:53:33.251 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=f670e0ca-f607-4ead-aa7f-6cae47991542]
2025-06-02 16:53:42.196348 | orchestrator | 16:53:42.195 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-02 16:53:42.217040 | orchestrator | 16:53:42.216 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-06-02 16:53:42.228809 | orchestrator | 16:53:42.228 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-06-02 16:53:42.678599 | orchestrator | 16:53:42.677 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=f55f9b75-ede6-45ec-ba73-e698d2579ebc]
2025-06-02 16:53:42.698049 | orchestrator | 16:53:42.697 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=cdb86c1c-1831-44b7-b92f-418079a02acc]
2025-06-02 16:53:42.922835 | orchestrator | 16:53:42.922 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=2407b0aa-2a97-4f6d-b632-c22fdcb681a0]
2025-06-02 16:53:42.947562 | orchestrator | 16:53:42.947 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-02 16:53:42.959707 | orchestrator | 16:53:42.959 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1697196148148277598]
2025-06-02 16:53:42.964799 | orchestrator | 16:53:42.964 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-06-02 16:53:42.965716 | orchestrator | 16:53:42.965 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-06-02 16:53:42.967131 | orchestrator | 16:53:42.967 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-06-02 16:53:42.976783 | orchestrator | 16:53:42.976 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-06-02 16:53:42.978748 | orchestrator | 16:53:42.978 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-06-02 16:53:42.979461 | orchestrator | 16:53:42.979 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-06-02 16:53:42.979897 | orchestrator | 16:53:42.979 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-06-02 16:53:42.982347 | orchestrator | 16:53:42.982 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-02 16:53:42.991672 | orchestrator | 16:53:42.991 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-06-02 16:53:42.998315 | orchestrator | 16:53:42.998 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-06-02 16:53:48.358751 | orchestrator | 16:53:48.358 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=f670e0ca-f607-4ead-aa7f-6cae47991542/d22e3547-dc50-4b67-b48e-5886da7d5148]
2025-06-02 16:53:48.362677 | orchestrator | 16:53:48.362 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=2407b0aa-2a97-4f6d-b632-c22fdcb681a0/5d913f80-ed99-4f7f-af77-a272e71d6767]
2025-06-02 16:53:48.392768 | orchestrator | 16:53:48.392 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=04c6c6f1-9af5-4b66-ae90-d0f9eb1891e3/6f5db02e-386c-41b9-ae07-b7cce6e0964a]
2025-06-02 16:53:48.443091 | orchestrator | 16:53:48.442 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=04c6c6f1-9af5-4b66-ae90-d0f9eb1891e3/fb369b5e-a271-4fa4-9f85-1311171daecb]
2025-06-02 16:53:48.454483 | orchestrator | 16:53:48.453 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=f670e0ca-f607-4ead-aa7f-6cae47991542/8b34934e-11eb-4c36-8207-511a42fe0f38]
2025-06-02 16:53:48.463667 | orchestrator | 16:53:48.463 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=2407b0aa-2a97-4f6d-b632-c22fdcb681a0/abb01d95-8fd4-488e-8b6c-7cb2a7271361]
2025-06-02 16:53:48.487814 | orchestrator | 16:53:48.487 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=f670e0ca-f607-4ead-aa7f-6cae47991542/37a5ef51-3790-4474-9294-da6668d88e33]
2025-06-02 16:53:48.507536 | orchestrator | 16:53:48.507 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=04c6c6f1-9af5-4b66-ae90-d0f9eb1891e3/cc6b7f8a-a299-449d-8912-3815da19ff1f]
2025-06-02 16:53:48.520864 | orchestrator | 16:53:48.520 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=2407b0aa-2a97-4f6d-b632-c22fdcb681a0/f15aa92f-a864-46a7-a446-d151182076d1]
2025-06-02 16:53:53.001908 | orchestrator | 16:53:53.001 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-06-02 16:54:03.007052 | orchestrator | 16:54:03.006 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-06-02 16:54:03.976690 | orchestrator | 16:54:03.976 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=fb0f1414-b576-4d95-bcca-abe87d84fd87]
2025-06-02 16:54:04.007706 | orchestrator | 16:54:04.007 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-06-02 16:54:04.007867 | orchestrator | 16:54:04.007 STDOUT terraform: Outputs:
2025-06-02 16:54:04.007922 | orchestrator | 16:54:04.007 STDOUT terraform: manager_address = 
2025-06-02 16:54:04.007944 | orchestrator | 16:54:04.007 STDOUT terraform: private_key = 
2025-06-02 16:54:04.107871 | orchestrator | ok: Runtime: 0:01:33.627545
2025-06-02 16:54:04.143998 |
2025-06-02 16:54:04.144170 | TASK [Fetch manager address]
2025-06-02 16:54:04.609886 | orchestrator | ok
2025-06-02 16:54:04.624329 |
2025-06-02 16:54:04.624605 | TASK [Set manager_host address]
2025-06-02 16:54:04.733142 | orchestrator | ok
2025-06-02 16:54:04.743829 |
2025-06-02 16:54:04.743974 | LOOP [Update ansible collections]
2025-06-02 16:54:05.788502 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 16:54:05.788901 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-06-02 16:54:05.788963 | orchestrator | Starting galaxy collection install process
2025-06-02 16:54:05.789002 | orchestrator | Process install dependency map
2025-06-02 16:54:05.789039 | orchestrator | Starting collection install process
2025-06-02 16:54:05.789070 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2025-06-02 16:54:05.789108 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2025-06-02 16:54:05.789146 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-06-02 16:54:05.789225 | orchestrator | ok: Item: commons Runtime: 0:00:00.692537
2025-06-02 16:54:06.853611 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-06-02 16:54:06.853834 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 16:54:06.853908 | orchestrator | Starting galaxy collection install process
2025-06-02 16:54:06.853959 | orchestrator | Process install dependency map
2025-06-02 16:54:06.854007 | orchestrator | Starting collection install process
2025-06-02 16:54:06.854051 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2025-06-02 16:54:06.854096 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2025-06-02 16:54:06.854139 | orchestrator | osism.services:999.0.0 was installed successfully
2025-06-02 16:54:06.854619 | orchestrator | ok: Item: services Runtime: 0:00:00.785485
2025-06-02 16:54:06.880488 |
2025-06-02 16:54:06.880697 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-02 16:54:17.538005 | orchestrator | ok
2025-06-02 16:54:17.552061 |
2025-06-02 16:54:17.552227 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-02 16:55:17.590074 | orchestrator | ok
2025-06-02 16:55:17.597503 |
2025-06-02 16:55:17.597609 | TASK [Fetch manager ssh hostkey]
2025-06-02 16:55:19.176838 | orchestrator | Output suppressed because no_log was given
2025-06-02 16:55:19.192765 |
2025-06-02 16:55:19.193008 | TASK [Get ssh keypair from terraform environment]
2025-06-02 16:55:19.729835 | orchestrator | ok: Runtime: 0:00:00.008344
2025-06-02 16:55:19.745587 |
2025-06-02 16:55:19.745762 | TASK [Point out that the following task takes some time and does not give any output]
2025-06-02 16:55:19.795389 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-06-02 16:55:19.806363 |
2025-06-02 16:55:19.806508 | TASK [Run manager part 0]
2025-06-02 16:55:20.966816 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 16:55:21.017293 | orchestrator |
2025-06-02 16:55:21.017396 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-06-02 16:55:21.017404 | orchestrator |
2025-06-02 16:55:21.017418 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-06-02 16:55:22.905291 | orchestrator | ok: [testbed-manager]
2025-06-02 16:55:22.905379 | orchestrator |
2025-06-02 16:55:22.905403 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-06-02 16:55:22.905415 | orchestrator |
2025-06-02 16:55:22.905425 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 16:55:24.864476 | orchestrator | ok: [testbed-manager]
2025-06-02 16:55:24.864645 | orchestrator |
2025-06-02 16:55:24.864671 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-06-02 16:55:25.607353 | orchestrator | ok: [testbed-manager]
2025-06-02 16:55:25.607451 | orchestrator |
2025-06-02 16:55:25.607466 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-06-02 16:55:25.661470 | orchestrator | skipping: [testbed-manager]
2025-06-02 16:55:25.661614 | orchestrator |
2025-06-02 16:55:25.661651 | orchestrator | TASK [Update package cache] ****************************************************
2025-06-02 16:55:25.703989 | orchestrator | skipping: [testbed-manager]
2025-06-02 16:55:25.704068 | orchestrator |
2025-06-02 16:55:25.704078 | orchestrator | TASK [Install required packages] ***********************************************
2025-06-02 16:55:25.747031 | orchestrator | skipping: [testbed-manager]
2025-06-02 16:55:25.747157 | orchestrator |
2025-06-02 16:55:25.747179 | orchestrator | TASK [Remove some python packages] *********************************************
2025-06-02 16:55:25.784283 | orchestrator | skipping: [testbed-manager]
2025-06-02 16:55:25.784368 | orchestrator |
2025-06-02 16:55:25.784375 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-06-02 16:55:25.819715 | orchestrator | skipping: [testbed-manager]
2025-06-02 16:55:25.819815 | orchestrator |
2025-06-02 16:55:25.819829 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-06-02 16:55:25.857730 | orchestrator | skipping: [testbed-manager]
2025-06-02 16:55:25.857794 | orchestrator |
2025-06-02 16:55:25.857805 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-06-02 16:55:25.902780 | orchestrator | skipping: [testbed-manager]
2025-06-02 16:55:25.902849 | orchestrator |
2025-06-02 16:55:25.902860 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-06-02 16:55:26.757050 | orchestrator | changed: [testbed-manager]
2025-06-02 16:55:26.757118 | orchestrator |
2025-06-02 16:55:26.757125 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-06-02 16:58:48.580884 | orchestrator | changed: [testbed-manager]
2025-06-02 16:58:48.580996 | orchestrator |
2025-06-02 16:58:48.581015 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 17:00:08.679961 | orchestrator | changed: [testbed-manager]
2025-06-02 17:00:08.680056 | orchestrator |
2025-06-02 17:00:08.680074 | orchestrator | TASK [Install required packages] ***********************************************
2025-06-02 17:00:31.518758 | orchestrator | changed: [testbed-manager]
2025-06-02 17:00:31.518867 | orchestrator |
2025-06-02 17:00:31.518887 | orchestrator | TASK [Remove some python packages] *********************************************
2025-06-02 17:00:41.175177 | orchestrator | changed: [testbed-manager]
2025-06-02 17:00:41.175407 | orchestrator |
2025-06-02 17:00:41.175429 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-06-02 17:00:41.224661 | orchestrator | ok: [testbed-manager]
2025-06-02 17:00:41.224744 | orchestrator |
2025-06-02 17:00:41.224760 | orchestrator | TASK [Get current user] ********************************************************
2025-06-02 17:00:42.044874 | orchestrator | ok: [testbed-manager]
2025-06-02 17:00:42.044964 | orchestrator |
2025-06-02 17:00:42.044982 | orchestrator | TASK [Create venv directory] ***************************************************
2025-06-02 17:00:42.806509 | orchestrator | changed: [testbed-manager]
2025-06-02 17:00:42.806606 | orchestrator |
2025-06-02 17:00:42.806622 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-06-02 17:00:49.672661 | orchestrator | changed: [testbed-manager]
2025-06-02 17:00:49.672765 | orchestrator |
2025-06-02 17:00:49.672805 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-06-02 17:00:56.083141 | orchestrator | changed: [testbed-manager]
2025-06-02 17:00:56.083212 | orchestrator |
2025-06-02 17:00:56.083230 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-06-02 17:00:58.964280 | orchestrator | changed: [testbed-manager]
2025-06-02 17:00:58.964345 | orchestrator |
2025-06-02 17:00:58.964354 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-06-02 17:01:00.837056 | orchestrator | changed: [testbed-manager]
2025-06-02 17:01:00.837142 | orchestrator |
2025-06-02 17:01:00.837158 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-06-02 17:01:02.051203 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-06-02 17:01:02.051351 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-06-02 17:01:02.051379 | orchestrator |
2025-06-02 17:01:02.051401 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-06-02 17:01:02.093628 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-06-02 17:01:02.093684 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-06-02 17:01:02.093690 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-06-02 17:01:02.093695 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-06-02 17:01:09.619821 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-06-02 17:01:09.619891 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-06-02 17:01:09.619899 | orchestrator |
2025-06-02 17:01:09.619907 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-06-02 17:01:10.211972 | orchestrator | changed: [testbed-manager]
2025-06-02 17:01:10.212064 | orchestrator |
2025-06-02 17:01:10.212080 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-06-02 17:02:00.165529 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-06-02 17:02:00.165630 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-06-02 17:02:00.165647 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-06-02 17:02:00.165660 | orchestrator |
2025-06-02 17:02:00.165674 | orchestrator | TASK [Install local collections] ***********************************************
2025-06-02 17:02:02.611624 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-06-02 17:02:02.611710 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-06-02 17:02:02.611726 | orchestrator |
2025-06-02 17:02:02.611739 | orchestrator | PLAY [Create operator user] ****************************************************
2025-06-02 17:02:02.611751 | orchestrator |
2025-06-02 17:02:02.611763 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:02:04.066966 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:04.067062 | orchestrator |
2025-06-02 17:02:04.067082 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-02 17:02:04.115440 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:04.115485 | orchestrator |
2025-06-02 17:02:04.115494 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-02 17:02:04.183324 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:04.183371 | orchestrator |
2025-06-02 17:02:04.183381 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-02 17:02:05.027975 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:05.028071 | orchestrator |
2025-06-02 17:02:05.028088 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-02 17:02:05.815547 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:05.816365 | orchestrator |
2025-06-02 17:02:05.816395 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 17:02:07.293320 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-06-02 17:02:07.293372 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-06-02 17:02:07.293380 | orchestrator |
2025-06-02 17:02:07.293394 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 17:02:08.783588 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:08.783715 | orchestrator |
2025-06-02 17:02:08.783732 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-02 17:02:10.620360 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:02:10.620410 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-06-02 17:02:10.620417 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:02:10.620422 | orchestrator |
2025-06-02 17:02:10.620428 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-02 17:02:11.216042 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:11.216090 | orchestrator |
2025-06-02 17:02:11.216099 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-02 17:02:11.287555 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:11.287605 | orchestrator |
2025-06-02 17:02:11.287613 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 17:02:12.199394 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:02:12.199520 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:12.199538 | orchestrator |
2025-06-02 17:02:12.199551 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 17:02:12.237953 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:12.237994 | orchestrator |
2025-06-02 17:02:12.238001 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 17:02:12.279628 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:12.279679 | orchestrator |
2025-06-02 17:02:12.279689 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 17:02:12.317535 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:12.317580 | orchestrator |
2025-06-02 17:02:12.317588 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 17:02:12.375063 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:12.375114 | orchestrator |
2025-06-02 17:02:12.375121 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 17:02:13.141248 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:13.141364 | orchestrator |
2025-06-02 17:02:13.141377 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-06-02 17:02:13.141389 | orchestrator |
2025-06-02 17:02:13.141400 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:02:14.626774 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:14.626885 | orchestrator |
2025-06-02 17:02:14.626901 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-06-02 17:02:15.608057 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:15.608097 | orchestrator |
2025-06-02 17:02:15.608103 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:02:15.608109 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-06-02 17:02:15.608114 | orchestrator |
2025-06-02 17:02:16.114121 | orchestrator | ok: Runtime: 0:06:55.590661
2025-06-02 17:02:16.129881 |
2025-06-02 17:02:16.130030 | TASK [Point out that the log in on the manager is now possible]
2025-06-02 17:02:16.178888 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-06-02 17:02:16.189219 |
2025-06-02 17:02:16.189355 | TASK [Point out that the following task takes some time and does not give any output]
2025-06-02 17:02:16.233993 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-06-02 17:02:16.241846 |
2025-06-02 17:02:16.241967 | TASK [Run manager part 1 + 2]
2025-06-02 17:02:17.190073 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-06-02 17:02:17.328749 | orchestrator |
2025-06-02 17:02:17.328847 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-06-02 17:02:17.328860 | orchestrator |
2025-06-02 17:02:17.328883 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:02:20.417311 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:20.417373 | orchestrator |
2025-06-02 17:02:20.417399 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-06-02 17:02:20.457006 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:20.457070 | orchestrator |
2025-06-02 17:02:20.457082 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-06-02 17:02:20.496003 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:20.496069 | orchestrator |
2025-06-02 17:02:20.496079 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 17:02:20.536261 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:20.536352 | orchestrator |
2025-06-02 17:02:20.536362 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 17:02:20.603206 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:20.603302 | orchestrator |
2025-06-02 17:02:20.603314 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 17:02:20.678001 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:20.678110 | orchestrator |
2025-06-02 17:02:20.678122 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 17:02:20.741585 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-06-02 17:02:20.741647 | orchestrator |
2025-06-02 17:02:20.741652 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 17:02:21.496612 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:21.496698 | orchestrator |
2025-06-02 17:02:21.496710 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 17:02:21.548157 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:21.548229 | orchestrator |
2025-06-02 17:02:21.548238 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 17:02:22.934209 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:22.934331 | orchestrator |
2025-06-02 17:02:22.934346 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 17:02:23.570002 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:23.570112 | orchestrator |
2025-06-02 17:02:23.570123 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 17:02:24.840743 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:24.840825 | orchestrator |
2025-06-02 17:02:24.840839 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 17:02:38.023056 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:38.023197 | orchestrator |
2025-06-02 17:02:38.023206 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-06-02 17:02:38.670386 | orchestrator | ok: [testbed-manager]
2025-06-02 17:02:38.670424 | orchestrator |
2025-06-02 17:02:38.670432 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-06-02 17:02:38.718775 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:38.718810 | orchestrator |
2025-06-02 17:02:38.718816 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-06-02 17:02:39.680456 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:39.680524 | orchestrator |
2025-06-02 17:02:39.680539 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-06-02 17:02:40.628467 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:40.628503 | orchestrator |
2025-06-02 17:02:40.628509 | orchestrator | TASK [Create configuration directory] ******************************************
2025-06-02 17:02:41.198970 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:41.199012 | orchestrator |
2025-06-02 17:02:41.199021 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-06-02 17:02:41.237378 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-06-02 17:02:41.237465 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-06-02 17:02:41.237475 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-06-02 17:02:41.237497 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-06-02 17:02:43.346818 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:43.346871 | orchestrator |
2025-06-02 17:02:43.346878 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-06-02 17:02:52.817025 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-06-02 17:02:52.817167 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-06-02 17:02:52.817175 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-06-02 17:02:52.817179 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-06-02 17:02:52.817187 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-06-02 17:02:52.817191 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-06-02 17:02:52.817195 | orchestrator |
2025-06-02 17:02:52.817199 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-06-02 17:02:53.883771 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:53.883808 | orchestrator |
2025-06-02 17:02:53.883816 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-06-02 17:02:53.931887 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:53.931969 | orchestrator |
2025-06-02 17:02:53.931985 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-06-02 17:02:57.270190 | orchestrator | changed: [testbed-manager]
2025-06-02 17:02:57.270237 | orchestrator |
2025-06-02 17:02:57.270247 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-06-02 17:02:57.312919 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:02:57.312959 | orchestrator |
2025-06-02 17:02:57.312967 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-06-02 17:04:41.761181 | orchestrator | changed: [testbed-manager]
2025-06-02 17:04:41.761226 | orchestrator |
2025-06-02 17:04:41.761235 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 17:04:42.938147 | orchestrator | ok: [testbed-manager]
2025-06-02 17:04:42.938189 | orchestrator |
2025-06-02 17:04:42.938197 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:04:42.938204 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 17:04:42.938209 | orchestrator |
2025-06-02 17:04:43.404516 | orchestrator | ok: Runtime: 0:02:26.438000
2025-06-02 17:04:43.420788 |
2025-06-02 17:04:43.420927 | TASK [Reboot manager]
2025-06-02 17:04:44.956483 | orchestrator | ok: Runtime: 0:00:00.986282
2025-06-02 17:04:44.973307 |
2025-06-02 17:04:44.973479 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-02 17:05:01.393720 | orchestrator | ok
2025-06-02 17:05:01.405656 |
2025-06-02 17:05:01.405866 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-02 17:06:01.448560 | orchestrator | ok
2025-06-02 17:06:01.458533 |
2025-06-02 17:06:01.458673 | TASK [Deploy manager + bootstrap nodes]
2025-06-02 17:06:04.173963 | orchestrator |
2025-06-02 17:06:04.174190 | orchestrator | # DEPLOY MANAGER
2025-06-02 17:06:04.174215 | orchestrator |
2025-06-02 17:06:04.174230 | orchestrator | + set -e
2025-06-02 17:06:04.174268 | orchestrator | + echo
2025-06-02 17:06:04.174283 | orchestrator | + echo '# DEPLOY MANAGER'
2025-06-02 17:06:04.174300 | orchestrator | + echo
2025-06-02 17:06:04.174350 | orchestrator | + cat /opt/manager-vars.sh
2025-06-02 17:06:04.177549 | orchestrator | export NUMBER_OF_NODES=6
2025-06-02 17:06:04.177589 | orchestrator |
2025-06-02 17:06:04.177609 | orchestrator | export CEPH_VERSION=reef
2025-06-02 17:06:04.177694 | orchestrator | export CONFIGURATION_VERSION=main
2025-06-02 17:06:04.177722 | orchestrator | export MANAGER_VERSION=9.1.0
2025-06-02 17:06:04.177755 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-06-02 17:06:04.177767 | orchestrator |
2025-06-02 17:06:04.177786 | orchestrator | export ARA=false
2025-06-02 17:06:04.177797 | orchestrator | export DEPLOY_MODE=manager
2025-06-02 17:06:04.177815 | orchestrator | export TEMPEST=false
2025-06-02 17:06:04.177827 | orchestrator | export IS_ZUUL=true
2025-06-02 17:06:04.177837 | orchestrator |
2025-06-02 17:06:04.177856 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2025-06-02 17:06:04.177868 | orchestrator | export EXTERNAL_API=false
2025-06-02 17:06:04.177878 | orchestrator |
2025-06-02 17:06:04.177889 | orchestrator | export IMAGE_USER=ubuntu
2025-06-02 17:06:04.177904 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:04.177915 | orchestrator |
2025-06-02 17:06:04.177925 | orchestrator | export CEPH_STACK=ceph-ansible
2025-06-02 17:06:04.177945 | orchestrator |
2025-06-02 17:06:04.177956 | orchestrator | + echo
2025-06-02 17:06:04.177969 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 17:06:04.178702 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 17:06:04.178734 | orchestrator | ++ INTERACTIVE=false
2025-06-02 17:06:04.178748 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 17:06:04.178762 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 17:06:04.178775 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 17:06:04.178787 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 17:06:04.178800 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 17:06:04.178813 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 17:06:04.178826 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 17:06:04.178845 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 17:06:04.178858 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 17:06:04.178871 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 17:06:04.178884 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 17:06:04.178898 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 17:06:04.178929 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 17:06:04.178950 | orchestrator | ++ export ARA=false
2025-06-02 17:06:04.178968 | orchestrator | ++ ARA=false
2025-06-02 17:06:04.178988 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 17:06:04.179007 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 17:06:04.179021 | orchestrator | ++ export TEMPEST=false
2025-06-02 17:06:04.179032 | orchestrator | ++ TEMPEST=false
2025-06-02 17:06:04.179043 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 17:06:04.179054 | orchestrator | ++ IS_ZUUL=true
2025-06-02 17:06:04.179064 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2025-06-02 17:06:04.179075 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2025-06-02 17:06:04.179086 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 17:06:04.179097 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 17:06:04.179108 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 17:06:04.179118 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 17:06:04.179129 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:04.179140 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:04.179151 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 17:06:04.179162 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 17:06:04.179178 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-02 17:06:04.240917 | orchestrator | + docker version
2025-06-02 17:06:04.536580 | orchestrator | Client: Docker Engine - Community
2025-06-02 17:06:04.536688 | orchestrator | Version: 27.5.1
2025-06-02 17:06:04.536705 | orchestrator | API version: 1.47
2025-06-02 17:06:04.536717 | orchestrator | Go version: go1.22.11
2025-06-02 17:06:04.536734 | orchestrator | Git commit: 9f9e405
2025-06-02 17:06:04.536754 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 17:06:04.536775 | orchestrator | OS/Arch: linux/amd64
2025-06-02 17:06:04.536794 | orchestrator | Context: default
2025-06-02 17:06:04.536814 | orchestrator |
2025-06-02 17:06:04.536834 | orchestrator | Server: Docker Engine - Community
2025-06-02 17:06:04.536854 | orchestrator | Engine:
2025-06-02 17:06:04.536873 | orchestrator | Version: 27.5.1
2025-06-02 17:06:04.536892 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-02 17:06:04.536976 | orchestrator | Go version: go1.22.11
2025-06-02 17:06:04.536994 | orchestrator | Git commit: 4c9b3b0
2025-06-02 17:06:04.537005 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 17:06:04.537016 | orchestrator | OS/Arch: linux/amd64
2025-06-02 17:06:04.537026 | orchestrator | Experimental: false
2025-06-02 17:06:04.537037 | orchestrator | containerd:
2025-06-02 17:06:04.537048 | orchestrator | Version: 1.7.27
2025-06-02 17:06:04.537059 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-02 17:06:04.537071 | orchestrator | runc:
2025-06-02 17:06:04.537082 | orchestrator | Version: 1.2.5
2025-06-02 17:06:04.537093 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-02 17:06:04.537104 | orchestrator | docker-init:
2025-06-02 17:06:04.537129 | orchestrator | Version: 0.19.0
2025-06-02 17:06:04.537141 | orchestrator | GitCommit: de40ad0
2025-06-02 17:06:04.541689 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-02 17:06:04.552107 | orchestrator | + set -e
2025-06-02 17:06:04.552138 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 17:06:04.552149 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 17:06:04.552160 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 17:06:04.552171 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 17:06:04.552182 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 17:06:04.552193 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 17:06:04.552203 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 17:06:04.552214 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 17:06:04.552225 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 17:06:04.552261 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 17:06:04.552275 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 17:06:04.552286 | orchestrator | ++ export ARA=false
2025-06-02 17:06:04.552297 | orchestrator | ++ ARA=false
2025-06-02 17:06:04.552308 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 17:06:04.552318 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 17:06:04.552329 | orchestrator | ++ export TEMPEST=false
2025-06-02 17:06:04.552340 | orchestrator | ++ TEMPEST=false
2025-06-02 17:06:04.552350 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 17:06:04.552361 | orchestrator | ++ IS_ZUUL=true
2025-06-02 17:06:04.552372 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2025-06-02 17:06:04.552383 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2025-06-02 17:06:04.552394 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 17:06:04.552405 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 17:06:04.552416 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 17:06:04.552426 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 17:06:04.552437 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:04.552448 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:04.552459 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 17:06:04.552470 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 17:06:04.552481 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 17:06:04.552491 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 17:06:04.552502 | orchestrator | ++ INTERACTIVE=false
2025-06-02 17:06:04.552513 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 17:06:04.552528 | orchestrator | ++
OSISM_APPLY_RETRY=1
2025-06-02 17:06:04.552539 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 17:06:04.552550 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0
2025-06-02 17:06:04.559954 | orchestrator | + set -e
2025-06-02 17:06:04.560011 | orchestrator | + VERSION=9.1.0
2025-06-02 17:06:04.560025 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-06-02 17:06:04.566778 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 17:06:04.566866 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-02 17:06:04.571779 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-02 17:06:04.575962 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-06-02 17:06:04.582884 | orchestrator | /opt/configuration ~
2025-06-02 17:06:04.582923 | orchestrator | + set -e
2025-06-02 17:06:04.582935 | orchestrator | + pushd /opt/configuration
2025-06-02 17:06:04.582946 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 17:06:04.585746 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 17:06:04.587646 | orchestrator | ++ deactivate nondestructive
2025-06-02 17:06:04.587685 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:04.587699 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:04.587733 | orchestrator | ++ hash -r
2025-06-02 17:06:04.587745 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:04.587756 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 17:06:04.587766 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 17:06:04.587777 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 17:06:04.587789 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 17:06:04.587800 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 17:06:04.587811 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 17:06:04.587822 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 17:06:04.587833 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:06:04.587845 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:06:04.587856 | orchestrator | ++ export PATH
2025-06-02 17:06:04.587867 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:04.587878 | orchestrator | ++ '[' -z '' ']'
2025-06-02 17:06:04.587889 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 17:06:04.587900 | orchestrator | ++ PS1='(venv) '
2025-06-02 17:06:04.587911 | orchestrator | ++ export PS1
2025-06-02 17:06:04.587922 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 17:06:04.587932 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 17:06:04.587943 | orchestrator | ++ hash -r
2025-06-02 17:06:04.587955 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-06-02 17:06:05.970625 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-06-02 17:06:05.971487 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-06-02 17:06:05.973128 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-06-02 17:06:05.974220 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-06-02 17:06:05.975716 | orchestrator | Requirement already satisfied: packaging in
/opt/venv/lib/python3.12/site-packages (25.0)
2025-06-02 17:06:05.986059 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-06-02 17:06:05.987478 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-06-02 17:06:05.988728 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-06-02 17:06:05.989827 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-06-02 17:06:06.033272 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-06-02 17:06:06.034789 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-06-02 17:06:06.036260 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-06-02 17:06:06.037826 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-06-02 17:06:06.042069 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-06-02 17:06:06.316903 | orchestrator | ++ which gilt
2025-06-02 17:06:06.319783 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-06-02 17:06:06.319820 | orchestrator | + /opt/venv/bin/gilt overlay
2025-06-02 17:06:06.593531 | orchestrator | osism.cfg-generics:
2025-06-02 17:06:06.774428 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-06-02 17:06:06.774624 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-06-02 17:06:06.774642 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-06-02 17:06:06.774656 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-06-02 17:06:07.464044 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-06-02 17:06:07.475459 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-06-02 17:06:07.967577 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-06-02 17:06:08.015105 | orchestrator | ~
2025-06-02 17:06:08.015199 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 17:06:08.015216 | orchestrator | + deactivate
2025-06-02 17:06:08.015229 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-02 17:06:08.015281 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:06:08.015294 | orchestrator | + export PATH
2025-06-02 17:06:08.015305 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-02 17:06:08.015316 | orchestrator | + '[' -n '' ']'
2025-06-02 17:06:08.015330 | orchestrator | + hash -r
2025-06-02 17:06:08.015341 | orchestrator | + '[' -n '' ']'
2025-06-02 17:06:08.015353 | orchestrator | + unset VIRTUAL_ENV
2025-06-02 17:06:08.015364 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-02 17:06:08.015375 | orchestrator | + '[' '!'
'' = nondestructive ']'
2025-06-02 17:06:08.015387 | orchestrator | + unset -f deactivate
2025-06-02 17:06:08.015398 | orchestrator | + popd
2025-06-02 17:06:08.016603 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-02 17:06:08.016631 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-02 17:06:08.018117 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-02 17:06:08.089171 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 17:06:08.089289 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-02 17:06:08.089307 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-02 17:06:08.139228 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 17:06:08.139338 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 17:06:08.139352 | orchestrator | ++ deactivate nondestructive
2025-06-02 17:06:08.139364 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:08.139375 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:08.139386 | orchestrator | ++ hash -r
2025-06-02 17:06:08.139410 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:08.139436 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 17:06:08.139448 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 17:06:08.139470 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 17:06:08.139482 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 17:06:08.139493 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 17:06:08.139504 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 17:06:08.139516 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 17:06:08.139528 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:06:08.139540 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:06:08.139578 | orchestrator | ++ export PATH
2025-06-02 17:06:08.139589 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:08.139606 | orchestrator | ++ '[' -z '' ']'
2025-06-02 17:06:08.139617 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 17:06:08.139628 | orchestrator | ++ PS1='(venv) '
2025-06-02 17:06:08.139639 | orchestrator | ++ export PS1
2025-06-02 17:06:08.139650 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 17:06:08.139661 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 17:06:08.139675 | orchestrator | ++ hash -r
2025-06-02 17:06:08.139687 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-02 17:06:09.470341 | orchestrator |
2025-06-02 17:06:09.471233 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-02 17:06:09.471290 | orchestrator |
2025-06-02 17:06:09.471306 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 17:06:10.099720 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:10.099833 | orchestrator |
2025-06-02 17:06:10.099850 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 17:06:11.242201 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:11.242359 | orchestrator |
2025-06-02 17:06:11.242377 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-02 17:06:11.242391 | orchestrator |
2025-06-02 17:06:11.242403 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:06:13.826187 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:13.826353 | orchestrator |
2025-06-02 17:06:13.826371 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-02 17:06:13.887461 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:13.887545 | orchestrator |
2025-06-02 17:06:13.887558 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-02 17:06:14.382545 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:14.382657 | orchestrator |
2025-06-02 17:06:14.382675 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-02 17:06:14.424817 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:14.424914 | orchestrator |
2025-06-02 17:06:14.424928 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 17:06:14.802319 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:14.802427 | orchestrator |
2025-06-02 17:06:14.802442 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-02 17:06:14.859633 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:14.859730 | orchestrator |
2025-06-02 17:06:14.859745 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-02 17:06:15.244721 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:15.244834 | orchestrator |
2025-06-02 17:06:15.244851 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-02 17:06:15.367280 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:15.367378 | orchestrator |
2025-06-02 17:06:15.367391 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-02 17:06:15.367403 | orchestrator |
2025-06-02 17:06:15.367415 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:06:17.309117 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:17.309227 | orchestrator |
2025-06-02 17:06:17.309288 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-02 17:06:17.419035 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-02 17:06:17.419136 | orchestrator |
2025-06-02 17:06:17.419152 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-02 17:06:17.480824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-02 17:06:17.480907 | orchestrator |
2025-06-02 17:06:17.480921 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-02 17:06:18.676785 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-02 17:06:18.676898 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-02 17:06:18.676916 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-02 17:06:18.676928 | orchestrator |
2025-06-02 17:06:18.676941 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-02 17:06:20.664446 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-02 17:06:20.664562 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-06-02 17:06:20.664579 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-02 17:06:20.664592 | orchestrator |
2025-06-02 17:06:20.664605 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-02 17:06:21.364093 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:06:21.364197 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:21.364214 | orchestrator |
2025-06-02 17:06:21.364226 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-02 17:06:22.061913 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:06:22.062129 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:22.062171 | orchestrator |
2025-06-02 17:06:22.062204 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-02 17:06:22.117447 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:22.117533 | orchestrator |
2025-06-02 17:06:22.117546 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-02 17:06:22.493020 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:22.493123 | orchestrator |
2025-06-02 17:06:22.493139 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-02 17:06:22.573201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-02 17:06:22.573332 | orchestrator |
2025-06-02 17:06:22.573347 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-02 17:06:23.703166 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:23.703341 | orchestrator |
2025-06-02 17:06:23.703360 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-02 17:06:24.620969 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:24.621088 | orchestrator |
2025-06-02 17:06:24.621104 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-02 17:06:36.448983 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:36.449110 | orchestrator |
2025-06-02 17:06:36.449151 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-02 17:06:36.518234 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:36.518343 | orchestrator |
2025-06-02 17:06:36.518359 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-02 17:06:36.518371 | orchestrator |
2025-06-02 17:06:36.518383 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:06:38.477846 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:38.477954 | orchestrator |
2025-06-02 17:06:38.477971 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-02 17:06:38.592141 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-02 17:06:38.592276 | orchestrator |
2025-06-02 17:06:38.592301 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-02 17:06:38.654202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 17:06:38.654331 | orchestrator |
2025-06-02 17:06:38.654346 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-02 17:06:41.627536 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:41.627653 | orchestrator |
2025-06-02 17:06:41.627667 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-02 17:06:41.688036 |
orchestrator | ok: [testbed-manager]
2025-06-02 17:06:41.688142 | orchestrator |
2025-06-02 17:06:41.688167 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-02 17:06:41.827898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-02 17:06:41.827999 | orchestrator |
2025-06-02 17:06:41.828014 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-02 17:06:44.929055 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-02 17:06:44.929173 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-02 17:06:44.929189 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-02 17:06:44.929203 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-02 17:06:44.929214 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-02 17:06:44.929226 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-02 17:06:44.929237 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-02 17:06:44.929288 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-02 17:06:44.929305 | orchestrator |
2025-06-02 17:06:44.929320 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-02 17:06:45.647816 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:45.647922 | orchestrator |
2025-06-02 17:06:45.647938 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-02 17:06:46.358519 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:46.358637 | orchestrator |
2025-06-02 17:06:46.358654 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-02 17:06:46.452326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-02 17:06:46.452423 | orchestrator |
2025-06-02 17:06:46.452437 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-02 17:06:47.807633 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-02 17:06:47.807741 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-02 17:06:47.807757 | orchestrator |
2025-06-02 17:06:47.807769 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-02 17:06:48.609125 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:48.609232 | orchestrator |
2025-06-02 17:06:48.609309 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-02 17:06:48.675814 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:48.675867 | orchestrator |
2025-06-02 17:06:48.675880 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-02 17:06:48.747137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-02 17:06:48.747175 | orchestrator |
2025-06-02 17:06:48.747187 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-02 17:06:50.287731 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:06:50.287832 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:06:50.287848 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:50.287862 | orchestrator |
2025-06-02 17:06:50.287875 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-02 17:06:50.988596 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:50.988715 | orchestrator |
2025-06-02 17:06:50.988733 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-02 17:06:51.053140 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:51.053228 | orchestrator |
2025-06-02 17:06:51.053240 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-02 17:06:51.156539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-02 17:06:51.156589 | orchestrator |
2025-06-02 17:06:51.156602 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-02 17:06:51.744305 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:51.744417 | orchestrator |
2025-06-02 17:06:51.744435 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-02 17:06:52.192710 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:52.192814 | orchestrator |
2025-06-02 17:06:52.192849 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-02 17:06:53.554476 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-02 17:06:53.554589 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-02 17:06:53.554607 | orchestrator |
2025-06-02 17:06:53.554620 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-02 17:06:54.251020 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:54.251124 | orchestrator |
2025-06-02 17:06:54.251139 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-02 17:06:54.688186 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:54.688327 | orchestrator |
2025-06-02 17:06:54.688345 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-02 17:06:55.050553 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:55.050650 | orchestrator |
2025-06-02 17:06:55.050664 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-02 17:06:55.087739 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:55.087802 | orchestrator |
2025-06-02 17:06:55.087819 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-02 17:06:55.157077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-02 17:06:55.157219 | orchestrator |
2025-06-02 17:06:55.157288 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-02 17:06:55.209721 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:55.209767 | orchestrator |
2025-06-02 17:06:55.209779 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-02 17:06:57.151318 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-02 17:06:57.151476 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-02 17:06:57.151493 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-02 17:06:57.151505 | orchestrator |
2025-06-02 17:06:57.151517 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-02 17:06:57.885007 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:57.885115 | orchestrator |
2025-06-02 17:06:57.885131 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-02 17:06:58.623952 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:58.624061 | orchestrator |
2025-06-02 17:06:58.624078 | orchestrator |
TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-02 17:06:59.393434 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:59.393545 | orchestrator |
2025-06-02 17:06:59.393561 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-02 17:06:59.479276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-02 17:06:59.479380 | orchestrator |
2025-06-02 17:06:59.479396 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-02 17:06:59.538514 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:59.538594 | orchestrator |
2025-06-02 17:06:59.538608 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-02 17:07:00.310979 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-02 17:07:00.311083 | orchestrator |
2025-06-02 17:07:00.311099 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-02 17:07:00.402946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-02 17:07:00.403051 | orchestrator |
2025-06-02 17:07:00.403066 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-02 17:07:01.168749 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:01.168844 | orchestrator |
2025-06-02 17:07:01.168859 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-02 17:07:01.857235 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:01.857396 | orchestrator |
2025-06-02 17:07:01.857419 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-02 17:07:01.920899 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:07:01.920990 | orchestrator |
2025-06-02 17:07:01.921005 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-02 17:07:01.984037 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:01.984120 | orchestrator |
2025-06-02 17:07:01.984134 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-02 17:07:02.874913 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:02.875002 | orchestrator |
2025-06-02 17:07:02.875017 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-02 17:08:10.619340 | orchestrator | changed: [testbed-manager]
2025-06-02 17:08:10.619463 | orchestrator |
2025-06-02 17:08:10.619480 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-02 17:08:11.639557 | orchestrator | ok: [testbed-manager]
2025-06-02 17:08:11.639654 | orchestrator |
2025-06-02 17:08:11.639670 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-02 17:08:11.697540 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:08:11.697624 | orchestrator |
2025-06-02 17:08:11.697637 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-02 17:08:14.457233 | orchestrator | changed: [testbed-manager]
2025-06-02 17:08:14.457386 | orchestrator |
2025-06-02 17:08:14.457404 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-02 17:08:14.523489 | orchestrator | ok: [testbed-manager]
2025-06-02 17:08:14.523592 | orchestrator |
2025-06-02 17:08:14.523607 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-02 17:08:14.523620 | orchestrator |
2025-06-02 17:08:14.523632 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-02 17:08:14.575708 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:08:14.575793 | orchestrator | 2025-06-02 17:08:14.575839 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-02 17:09:14.635417 | orchestrator | Pausing for 60 seconds 2025-06-02 17:09:14.635540 | orchestrator | changed: [testbed-manager] 2025-06-02 17:09:14.635558 | orchestrator | 2025-06-02 17:09:14.635571 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-02 17:09:19.841254 | orchestrator | changed: [testbed-manager] 2025-06-02 17:09:19.841422 | orchestrator | 2025-06-02 17:09:19.841440 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-02 17:10:01.467886 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-02 17:10:01.468003 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-02 17:10:01.468019 | orchestrator | changed: [testbed-manager] 2025-06-02 17:10:01.468032 | orchestrator | 2025-06-02 17:10:01.468045 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-02 17:10:11.305623 | orchestrator | changed: [testbed-manager] 2025-06-02 17:10:11.305770 | orchestrator | 2025-06-02 17:10:11.305807 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-02 17:10:11.399474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-02 17:10:11.399567 | orchestrator | 2025-06-02 17:10:11.399581 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-02 17:10:11.399594 | orchestrator | 2025-06-02 17:10:11.399605 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-02 17:10:11.455425 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:10:11.455520 | orchestrator | 2025-06-02 17:10:11.455534 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:10:11.455548 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 17:10:11.455560 | orchestrator | 2025-06-02 17:10:11.566511 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-02 17:10:11.566615 | orchestrator | + deactivate 2025-06-02 17:10:11.566630 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-02 17:10:11.566644 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-02 17:10:11.566655 | orchestrator | + export PATH 2025-06-02 17:10:11.566672 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-02 
17:10:11.566683 | orchestrator | + '[' -n '' ']' 2025-06-02 17:10:11.566702 | orchestrator | + hash -r 2025-06-02 17:10:11.566721 | orchestrator | + '[' -n '' ']' 2025-06-02 17:10:11.566740 | orchestrator | + unset VIRTUAL_ENV 2025-06-02 17:10:11.566757 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-02 17:10:11.566777 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-02 17:10:11.566797 | orchestrator | + unset -f deactivate 2025-06-02 17:10:11.566817 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-02 17:10:11.572792 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 17:10:11.572834 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 17:10:11.572850 | orchestrator | + local max_attempts=60 2025-06-02 17:10:11.572863 | orchestrator | + local name=ceph-ansible 2025-06-02 17:10:11.572876 | orchestrator | + local attempt_num=1 2025-06-02 17:10:11.573848 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:10:11.615336 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:10:11.615411 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 17:10:11.615424 | orchestrator | + local max_attempts=60 2025-06-02 17:10:11.615435 | orchestrator | + local name=kolla-ansible 2025-06-02 17:10:11.615447 | orchestrator | + local attempt_num=1 2025-06-02 17:10:11.616305 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 17:10:11.651550 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:10:11.651600 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 17:10:11.651614 | orchestrator | + local max_attempts=60 2025-06-02 17:10:11.651626 | orchestrator | + local name=osism-ansible 2025-06-02 17:10:11.651638 | orchestrator | + local attempt_num=1 2025-06-02 17:10:11.652053 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-02 17:10:11.696138 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:10:11.696194 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 17:10:11.696206 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 17:10:12.489916 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-02 17:10:12.702139 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-02 17:10:12.702241 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702258 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702270 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-02 17:10:12.702320 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-02 17:10:12.702333 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702344 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702355 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-06-02 17:10:12.702365 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-06-02 17:10:12.702376 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-02 17:10:12.702387 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702398 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-02 17:10:12.702409 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702420 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702430 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.702441 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-02 17:10:12.710819 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-02 17:10:12.757632 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-02 17:10:12.757701 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-02 17:10:12.762898 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-02 17:10:14.633660 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:10:14.633765 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:10:14.633779 | orchestrator | Registering Redlock._release_script 
2025-06-02 17:10:14.853333 | orchestrator | 2025-06-02 17:10:14 | INFO  | Task 4bd54665-03f0-4fd3-ae04-4be0924d1c51 (resolvconf) was prepared for execution. 2025-06-02 17:10:14.853431 | orchestrator | 2025-06-02 17:10:14 | INFO  | It takes a moment until task 4bd54665-03f0-4fd3-ae04-4be0924d1c51 (resolvconf) has been started and output is visible here. 2025-06-02 17:10:18.939428 | orchestrator | 2025-06-02 17:10:18.939542 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-02 17:10:18.939560 | orchestrator | 2025-06-02 17:10:18.942181 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 17:10:18.943346 | orchestrator | Monday 02 June 2025 17:10:18 +0000 (0:00:00.163) 0:00:00.163 *********** 2025-06-02 17:10:22.927914 | orchestrator | ok: [testbed-manager] 2025-06-02 17:10:22.928042 | orchestrator | 2025-06-02 17:10:22.928977 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-02 17:10:22.929005 | orchestrator | Monday 02 June 2025 17:10:22 +0000 (0:00:03.997) 0:00:04.161 *********** 2025-06-02 17:10:22.995808 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:10:22.996401 | orchestrator | 2025-06-02 17:10:22.997045 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-02 17:10:22.997518 | orchestrator | Monday 02 June 2025 17:10:22 +0000 (0:00:00.069) 0:00:04.230 *********** 2025-06-02 17:10:23.078246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-02 17:10:23.079825 | orchestrator | 2025-06-02 17:10:23.080909 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-02 17:10:23.081399 | orchestrator | Monday 02 June 2025 17:10:23 +0000 (0:00:00.081) 0:00:04.312 
*********** 2025-06-02 17:10:23.163748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 17:10:23.164363 | orchestrator | 2025-06-02 17:10:23.165248 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-02 17:10:23.165994 | orchestrator | Monday 02 June 2025 17:10:23 +0000 (0:00:00.085) 0:00:04.397 *********** 2025-06-02 17:10:24.339496 | orchestrator | ok: [testbed-manager] 2025-06-02 17:10:24.339615 | orchestrator | 2025-06-02 17:10:24.339843 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-02 17:10:24.339944 | orchestrator | Monday 02 June 2025 17:10:24 +0000 (0:00:01.174) 0:00:05.572 *********** 2025-06-02 17:10:24.392210 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:10:24.392660 | orchestrator | 2025-06-02 17:10:24.393201 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-02 17:10:24.393694 | orchestrator | Monday 02 June 2025 17:10:24 +0000 (0:00:00.054) 0:00:05.626 *********** 2025-06-02 17:10:24.924688 | orchestrator | ok: [testbed-manager] 2025-06-02 17:10:24.924791 | orchestrator | 2025-06-02 17:10:24.925627 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-02 17:10:24.926708 | orchestrator | Monday 02 June 2025 17:10:24 +0000 (0:00:00.532) 0:00:06.158 *********** 2025-06-02 17:10:25.006443 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:10:25.006797 | orchestrator | 2025-06-02 17:10:25.007957 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-02 17:10:25.008426 | orchestrator | Monday 02 June 2025 17:10:24 +0000 (0:00:00.080) 0:00:06.238 *********** 2025-06-02 17:10:25.546983 | orchestrator | changed: [testbed-manager] 
2025-06-02 17:10:25.547831 | orchestrator | 2025-06-02 17:10:25.548883 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-02 17:10:25.550493 | orchestrator | Monday 02 June 2025 17:10:25 +0000 (0:00:00.542) 0:00:06.780 *********** 2025-06-02 17:10:26.693962 | orchestrator | changed: [testbed-manager] 2025-06-02 17:10:26.694679 | orchestrator | 2025-06-02 17:10:26.695551 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-02 17:10:26.696614 | orchestrator | Monday 02 June 2025 17:10:26 +0000 (0:00:01.144) 0:00:07.925 *********** 2025-06-02 17:10:27.687163 | orchestrator | ok: [testbed-manager] 2025-06-02 17:10:27.689384 | orchestrator | 2025-06-02 17:10:27.690385 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-02 17:10:27.691089 | orchestrator | Monday 02 June 2025 17:10:27 +0000 (0:00:00.993) 0:00:08.919 *********** 2025-06-02 17:10:27.768459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-02 17:10:27.769091 | orchestrator | 2025-06-02 17:10:27.770374 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-02 17:10:27.771237 | orchestrator | Monday 02 June 2025 17:10:27 +0000 (0:00:00.083) 0:00:09.002 *********** 2025-06-02 17:10:28.927674 | orchestrator | changed: [testbed-manager] 2025-06-02 17:10:28.927784 | orchestrator | 2025-06-02 17:10:28.928835 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:10:28.929242 | orchestrator | 2025-06-02 17:10:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:10:28.929636 | orchestrator | 2025-06-02 17:10:28 | INFO  | Please wait and do not abort execution. 
2025-06-02 17:10:28.930622 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:10:28.931594 | orchestrator | 2025-06-02 17:10:28.932201 | orchestrator | 2025-06-02 17:10:28.933075 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:10:28.933706 | orchestrator | Monday 02 June 2025 17:10:28 +0000 (0:00:01.158) 0:00:10.161 *********** 2025-06-02 17:10:28.934475 | orchestrator | =============================================================================== 2025-06-02 17:10:28.934950 | orchestrator | Gathering Facts --------------------------------------------------------- 4.00s 2025-06-02 17:10:28.936226 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.17s 2025-06-02 17:10:28.936692 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-06-02 17:10:28.937510 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.14s 2025-06-02 17:10:28.938575 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2025-06-02 17:10:28.938984 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2025-06-02 17:10:28.939537 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2025-06-02 17:10:28.940242 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-06-02 17:10:28.940359 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-06-02 17:10:28.941021 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-06-02 17:10:28.941624 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-06-02 
17:10:28.942001 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-06-02 17:10:28.942262 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-06-02 17:10:29.474895 | orchestrator | + osism apply sshconfig 2025-06-02 17:10:31.206243 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:10:31.206409 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:10:31.206426 | orchestrator | Registering Redlock._release_script 2025-06-02 17:10:31.282716 | orchestrator | 2025-06-02 17:10:31 | INFO  | Task 3312c29b-5595-4fa9-91d3-c58ec9d6028a (sshconfig) was prepared for execution. 2025-06-02 17:10:31.282837 | orchestrator | 2025-06-02 17:10:31 | INFO  | It takes a moment until task 3312c29b-5595-4fa9-91d3-c58ec9d6028a (sshconfig) has been started and output is visible here. 2025-06-02 17:10:35.360742 | orchestrator | 2025-06-02 17:10:35.362122 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-02 17:10:35.363022 | orchestrator | 2025-06-02 17:10:35.363966 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-02 17:10:35.366367 | orchestrator | Monday 02 June 2025 17:10:35 +0000 (0:00:00.180) 0:00:00.180 *********** 2025-06-02 17:10:35.932422 | orchestrator | ok: [testbed-manager] 2025-06-02 17:10:35.932974 | orchestrator | 2025-06-02 17:10:35.934386 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-02 17:10:35.935345 | orchestrator | Monday 02 June 2025 17:10:35 +0000 (0:00:00.574) 0:00:00.755 *********** 2025-06-02 17:10:36.472074 | orchestrator | changed: [testbed-manager] 2025-06-02 17:10:36.472649 | orchestrator | 2025-06-02 17:10:36.473998 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-02 17:10:36.474454 | orchestrator | 
Monday 02 June 2025 17:10:36 +0000 (0:00:00.539) 0:00:01.294 *********** 2025-06-02 17:10:42.427736 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-02 17:10:42.428169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-02 17:10:42.429746 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-02 17:10:42.430754 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-02 17:10:42.432265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-02 17:10:42.433131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-02 17:10:42.435473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-02 17:10:42.436534 | orchestrator | 2025-06-02 17:10:42.437586 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-02 17:10:42.438484 | orchestrator | Monday 02 June 2025 17:10:42 +0000 (0:00:05.955) 0:00:07.249 *********** 2025-06-02 17:10:42.498611 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:10:42.499583 | orchestrator | 2025-06-02 17:10:42.500580 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-02 17:10:42.501100 | orchestrator | Monday 02 June 2025 17:10:42 +0000 (0:00:00.071) 0:00:07.321 *********** 2025-06-02 17:10:43.077370 | orchestrator | changed: [testbed-manager] 2025-06-02 17:10:43.077982 | orchestrator | 2025-06-02 17:10:43.078345 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:10:43.079708 | orchestrator | 2025-06-02 17:10:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:10:43.080003 | orchestrator | 2025-06-02 17:10:43 | INFO  | Please wait and do not abort execution. 
2025-06-02 17:10:43.081206 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:10:43.082539 | orchestrator | 2025-06-02 17:10:43.083722 | orchestrator | 2025-06-02 17:10:43.084890 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:10:43.085695 | orchestrator | Monday 02 June 2025 17:10:43 +0000 (0:00:00.578) 0:00:07.900 *********** 2025-06-02 17:10:43.086175 | orchestrator | =============================================================================== 2025-06-02 17:10:43.086886 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.96s 2025-06-02 17:10:43.087746 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-06-02 17:10:43.088133 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2025-06-02 17:10:43.088729 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s 2025-06-02 17:10:43.089511 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-06-02 17:10:43.606096 | orchestrator | + osism apply known-hosts 2025-06-02 17:10:45.284019 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:10:45.284136 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:10:45.284155 | orchestrator | Registering Redlock._release_script 2025-06-02 17:10:45.345914 | orchestrator | 2025-06-02 17:10:45 | INFO  | Task 005fbdee-5c2c-427a-9b68-b390c9eea892 (known-hosts) was prepared for execution. 2025-06-02 17:10:45.346005 | orchestrator | 2025-06-02 17:10:45 | INFO  | It takes a moment until task 005fbdee-5c2c-427a-9b68-b390c9eea892 (known-hosts) has been started and output is visible here. 
2025-06-02 17:10:49.494612 | orchestrator | 2025-06-02 17:10:49.494727 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-02 17:10:49.495360 | orchestrator | 2025-06-02 17:10:49.496421 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-02 17:10:49.496958 | orchestrator | Monday 02 June 2025 17:10:49 +0000 (0:00:00.196) 0:00:00.196 *********** 2025-06-02 17:10:55.679585 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 17:10:55.679707 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 17:10:55.680447 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 17:10:55.681387 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 17:10:55.682103 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 17:10:55.682981 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 17:10:55.683471 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-02 17:10:55.683953 | orchestrator | 2025-06-02 17:10:55.684922 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-02 17:10:55.685563 | orchestrator | Monday 02 June 2025 17:10:55 +0000 (0:00:06.186) 0:00:06.383 *********** 2025-06-02 17:10:55.871621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 17:10:55.871770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 17:10:55.871786 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-02 17:10:55.873731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-02 17:10:55.874685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-02 17:10:55.875446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-02 17:10:55.875540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-02 17:10:55.876035 | orchestrator | 2025-06-02 17:10:55.876574 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:10:55.876998 | orchestrator | Monday 02 June 2025 17:10:55 +0000 (0:00:00.192) 0:00:06.575 *********** 2025-06-02 17:10:57.069178 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBORGInkK7NMc1qTUW1Y4Wyeem7yrESG+WHVM+w/yG12UekU85jdwrSAqCsWP8f/WWlofVc0xflZo53hyRjn1hMg=) 2025-06-02 17:10:57.069808 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBPtgqNH8rEiVDHsXu6eQknQ1V/vxfagtRpBzaRrMUBx) 2025-06-02 17:10:57.071232 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGZCP5WSeLSOGGsR9RzUnP/VHpp4ssPPZKOObObunD4fHUlUMTi1rQ/+6I7ZqZq7oMXNprzbTwOqDwRlqiGaVHs1O+XdVcYBfotTPmpN9WIibTPfhU1T2rqHJLkm8j4krXHbNKcFXU+k30vhG88vFI3zbcmbaFI8uwl/FbpbqDsE29gjoXzDypeGEgQ3OQPqX8Cie41nlw78OQeQm6dtLaW+1HL/RKMoo5XsnrdmqGr0b31WJ0nyivlS0SI0t14QjTz7zlNEM2piAvP9DbexEwbYvYoJcZJiczkig2ysSwBKXzxg21Z+ndm9oTmyMG3vnhvcDvBEb9nJ0H0i2KrpG6pG/GPRODn/P5VAruZ785ytQFpW5GM2M3gFqzMDzJR7Jj17l5DORjX2Xgg6VoczGsvnk4krQDMxPsFfWLVZrRi8vIsB5G+lOxtTr4xDthOnbFWCJIgs4ccC/hLBxfDqvxhiDwE6E00hMa/YFC+mCq+jqxcD7n/xsVQIiTtZ0G4z8=) 2025-06-02 17:10:57.072695 | orchestrator | 2025-06-02 17:10:57.074163 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:10:57.074907 | orchestrator | Monday 02 June 2025 17:10:57 +0000 (0:00:01.197) 0:00:07.773 *********** 2025-06-02 17:10:58.181550 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdsBZ9ZFI5bVCgMfu654Eik5d4WLowk4KRtLVUYRAvnaXuM2Pb9zm2Mlpf8XSrEpfxv3b/1geyIEI0Lz81Z4ixaXTJVx0KgXstOYHcFwbLQYxgqJWs4fpHLcL6/6dFtL0ZwFAS1BioifWBrTSjK8aQiIiSHbJU1l2EDhOaI9CX4d+6R8p6hh6YlZeBPa018CtNNCGrZf55FY44l3A7vXmbZFm6G/K8S18M/tfrO/Ta8gAVjbcp9Oa5v3105a9kXngKgbNgEeoGibie6JJK2BSgFDBFbZ0Vhb+n+R12SE9KKIqZVVTdvtiWk1rG0tEAMKEj6Qh1lqBFMTn17csAP9bfs4najus1QlZPav9ah6io+E06kbNF/3QC9V6O1eVzb0wbVNCqewggz9qMn2IaVte6Yo0iWOzNb6hwqWQb/uXUXotByYhCuxoygWN1pvPAZS0dT2oOIvZDgv5lTCy3lAAORfITiESG7TPwJx8hV1dyuT9sq+PbzcIJBHmt8J78OvM=) 2025-06-02 17:10:58.181661 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7IukytFRw+8dLv9f7dHAHZWTDz3ckPkRluFpnAaiSvlzd0jynxuEHX3sG5Zs4sy5/KAbQCYVAQ9RPwzmVtSXU=) 2025-06-02 17:10:58.181677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTe6/N8dqKVqllbhG6+6FQEmLtNNBcIbUvnxYpWT1x/) 2025-06-02 17:10:58.182188 | orchestrator | 2025-06-02 17:10:58.182637 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:10:58.183041 | orchestrator | Monday 02 June 2025 17:10:58 +0000 (0:00:01.112) 0:00:08.886 *********** 2025-06-02 17:10:59.418715 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6oU1gt5sZmYG/XvQxW0lvJHrOi2ZVS6fTHmyryfJ0gEEUU0Nd3GHt645nhAtx8Mcpy4r8gfkg2asjxqRYMepL59od3VoHW5Mg+WSI7gMfat7PDIZ4ZfvHrjbk8flKz3tee1FeKq40GVTll1GoSSkKIx6RoPa/7r3a0BOu7uqLZG/XzN9fQltZ7sFku3m+3L/gkXz9Hc/fb8ngD6GgZTrJYu/qYn+fAJhQ/Qkc/Cab1VLDvF1WLet3ovxwCk/yEodRZdSBSBeCMiF0SdTWsvgSuwPX5mFK5uNRIIgvDAo/SxNduqwSaTAtLxkTL3pawHMZ+C4BOMDuC1yFbY6YEs+HxhSeQoBcKI+mq1s0kZFtwR/kimC2A2CUc1fI4CPgAEj9nM0L4UeGa6NiNg0Q0m0YkvPT5TVf3kJD6HvWFmjEI9nX5dIgzbTHz3I7heYFuikKpQMekWFCT7xPuZLexB+tiFjSJi3i2e8xUe0GrtqJYjUGUdC8ipcc2vJj5tVisgM=) 2025-06-02 17:10:59.419118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEujn2nALpVcF7F2N7n9rTgdFxZIf1aq8PpufgsVGHlZ) 2025-06-02 17:10:59.419908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBnq7rAfYzHzg2TlcfWtFmakUQOu3mGXWJh1UAZhNb75uCtp6Xjw/vBk2T4I3yv6BUy+nrH/oZe9uxDbGl5ZY2c=) 2025-06-02 17:10:59.421166 | orchestrator | 2025-06-02 17:10:59.422860 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:10:59.424071 | orchestrator | Monday 02 June 2025 17:10:59 +0000 (0:00:01.237) 0:00:10.124 *********** 2025-06-02 17:11:00.565908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCx/qqluusW54gG0Og53zZOqLC4WfUfJ9eA1WPSxtJpoiPVRhcrwlThaI3POM+P10daxHc0+FVh/mgc3QYrdCGIjWdxLoiMp9dMU4haEsPlinCZobfKb4hhgWfgKb8Sau/RkV3h1OHJuzAMOKXy3+3zapeZctKzbWmj1BUnLlJNlDrKrpyDa6kx2vXaeEpEygElS98Pmmh+6jzCXjmc+fDkAb1/DNieBFLOBdwwrIwr121aWi14SLhGprCQ4+UEjshnpqHLEz8DP9Ows+UiitAgljM0hRs2v2VW3Tw9Ve178+f1LWQwhGnCgCSWDDpY28SM10B35uxv6ZzxYAE1oednzirhfKNc124H2t4geeOCPFXUi7XOk8xB6cPkok/mVdsGqWMIXhCxmNH/5XQZS88aOwVHIcxDC7oMbOHbZN34Z4CBZz8nAk8xNYlRFRhkQdXB/GDE/XwW35alz6j/LgUv3GWAEAedWG7ZpiYYtyPm69jixXTmDvuhKh1NSySPg1E=) 2025-06-02 17:11:00.566249 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ7P7UOpca1UpJn+qgt/LWwQYFJP5eS2s83XYlFL5w9+J5OsZowAFtrvUtOTCUEWCiLXehCLkF7aq58dHTo3K+k=) 2025-06-02 17:11:00.566279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILG/ddr7GgxxrubjVj9eMeoMHhrE0vDbz7Cwi0ReBNLI) 2025-06-02 17:11:00.566365 | orchestrator | 2025-06-02 17:11:00.566482 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:00.566501 | orchestrator | Monday 02 June 2025 17:11:00 +0000 (0:00:01.146) 0:00:11.270 *********** 2025-06-02 17:11:01.695537 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3at3tYLAvgrsB8jqmSc2UveuALKYk0v7VsvUgodQtMnKzA4SCSRLjhll12CEpkX5wWxfayH5HqXQOniZYxPiU1yb8TWXQQ8aK1Ct4ymZ1ZPtNh3VMgo6j0D6wPCl1Yxe+3OqbVpxSfZLVS1V7T48OO7Dm+/THVgnn4cCFVZz+IpXU92q3nm91v1NCu0PN6g/+9vpbMER/08DCyWikEebTIo7prOVuy97s0dBIJyEzuKuAHPPlzHv3XTZrXGRoUC4S+J6SfZKkbmZj1k04OVl2rfzOL+mATv5We+O0GKEf0ZZibUUZWgYuUTw5VBeiZmzaZI93SOPIgY2K0SkvdyZe2DffExld2KPwJK40NNpxHrvqxgBUFTgv24SNQEVlbW2BcUmQPYIgtFxhnLTqaEtgWUbI50Fow7uUgvC0N+leU2xmfK3qktTKh/1Rrs/AlauW4hNlIhfIKzWO2E4SmuvUzwjh6mBTsqalFZgOq56mDr05AYVPOQtdbrClrBypCsU=) 2025-06-02 17:11:01.695964 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHeTwv4Wn79IHYbfLLw34MuZ8KyKhGZpjylRqC1oK+SAhCwW0DJk0nOZMOFgTdOsyyJOd8xLlWzn7JUwdsJ6rzA=) 2025-06-02 17:11:01.696400 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH+9BsaXfJs+CQiO7/LTnpSTchgq0uWXTSEU6kH+14aZ) 2025-06-02 17:11:01.697491 | orchestrator | 2025-06-02 17:11:01.698380 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:01.699200 | orchestrator | Monday 02 June 2025 17:11:01 +0000 (0:00:01.129) 0:00:12.400 *********** 2025-06-02 17:11:02.908581 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLPhhp/vKfC4d/tapGF30qVaHJcwFTuOmdrU39JwE1nDd8mgZuQmm43/6fYht0apQLX948jNNSelOyw2lStuQM=) 2025-06-02 17:11:02.909456 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFFl4gIcwlkq6e4DGjsKODZJlLRaIiH7mZN5fFL15bxS) 2025-06-02 17:11:02.910838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzGGdVSEu6dRAXcJgwPJV6Vr2Rlu+Sv0MlDInQTejYF2/6no5NQoBIh9YyL3IHjIQd99oI+yiktuyq2uMVwOVP3pvjWcXK6eosQ77j/UruUDgqRiPl4XjO7FzYdkJ2GGfoR3Y/fJrtIunZLO1r6atlomn7Txl24/S2/cldryJC2snZnfzwtfxGxzjgcHWEBSxf3tDBBoXztTEpUUJ+gChsixotCDloAHd9z5PfeXAQrGs0b2vS6c2T+M7sPmdxeOfqHE+vu4qypYKQKKisgpgSALyjhijV5bWOvnlEUYZiil0lwyDB5oSIc4m161Ry/hAyVuqwnyUEWq/tviQZH3gkjKUzkEGP9YSsdFd4R89yUYN0Tx5NsqyS5IRRO9L55kcXXc7Zp3E9f7L9yQQxk9i3gkxL008qsbqubp7BYYy8OKyKeUDok03f6yMOwKH58vGpeuIEcshw+KGS+GPUOmlMU7x8egJAH29Ols2h+sAWqo+3iQ2dlcmhGx2dHwItIXs=) 2025-06-02 17:11:02.911723 | orchestrator | 2025-06-02 17:11:02.912765 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:02.913649 | orchestrator | Monday 02 June 2025 17:11:02 +0000 (0:00:01.213) 
0:00:13.613 *********** 2025-06-02 17:11:04.041715 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCue1FkwFGx34X3hwlOGHNeRBEwEENTVLVwaGFh5TgSK1QXoECJ5iU41eFovJ3eUfGroCTuRam6UUZplM4R/WqGOByE4Fvry/NuXgIlRBSohElMrRNm3mCmm/D+V2Df1bmTkYxJSYBBzj/DL1BWw/s83TRg2XMjxeiIfqoSJW8IvnAPo/7sz0Zf0w7h5R1dFLHWmDk1RjY8szjsRjcgmRv0diFTGQC3JcXBtRrv3sElp8ZL1+xH+5LhRdU2shpKT/TSs3Qk/69lYra8U1boXFVU3gShrwTjneQThQU7gk5eKwMk/ByqqUQNmJeWx+MZDmTVGpFHJUfSiP/w8TAumS71brVwdWQ2xCMNoIYrns2jTdub9QdHmz3KTWY2ZkDyQU2O9yN1jeiL88FnOHm2Wygx+k0BrZyaUZ/fEjJSszF+b1kpsbNVOY6Fh6pqr3+qRUmIoqSCIq+jRMNAnF7heSZzfm6luNvcF+osQ9kOu5Pk+uXi+mMvrcPSJJr5fyPhaGs=) 2025-06-02 17:11:04.042452 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGtBA6x53CZ2cvCN272UsvldJ5XXxe4g3lS63TkfFz1v31uJ7uC7bQ0/5hDLkxbh8xqzf0awVDb+f8caNiHiVks=) 2025-06-02 17:11:04.043005 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLh0cY4nIZIBdSFC8jIeTwQoi9UTGVGQpTfGZewxNkU) 2025-06-02 17:11:04.044475 | orchestrator | 2025-06-02 17:11:04.045843 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-02 17:11:04.046471 | orchestrator | Monday 02 June 2025 17:11:04 +0000 (0:00:01.134) 0:00:14.747 *********** 2025-06-02 17:11:09.500039 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 17:11:09.500572 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 17:11:09.502246 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 17:11:09.503456 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 17:11:09.503770 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 17:11:09.504281 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 17:11:09.505108 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-06-02 17:11:09.506821 | orchestrator | 2025-06-02 17:11:09.510353 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-02 17:11:09.512510 | orchestrator | Monday 02 June 2025 17:11:09 +0000 (0:00:05.457) 0:00:20.204 *********** 2025-06-02 17:11:09.673891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 17:11:09.674148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 17:11:09.674170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-02 17:11:09.675068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-02 17:11:09.675246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-02 17:11:09.676413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-02 17:11:09.677400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-02 17:11:09.677536 | orchestrator | 2025-06-02 17:11:09.677870 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:09.678398 | orchestrator | Monday 02 June 2025 17:11:09 +0000 (0:00:00.176) 0:00:20.381 *********** 2025-06-02 17:11:10.766997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBPtgqNH8rEiVDHsXu6eQknQ1V/vxfagtRpBzaRrMUBx) 2025-06-02 17:11:10.767492 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGZCP5WSeLSOGGsR9RzUnP/VHpp4ssPPZKOObObunD4fHUlUMTi1rQ/+6I7ZqZq7oMXNprzbTwOqDwRlqiGaVHs1O+XdVcYBfotTPmpN9WIibTPfhU1T2rqHJLkm8j4krXHbNKcFXU+k30vhG88vFI3zbcmbaFI8uwl/FbpbqDsE29gjoXzDypeGEgQ3OQPqX8Cie41nlw78OQeQm6dtLaW+1HL/RKMoo5XsnrdmqGr0b31WJ0nyivlS0SI0t14QjTz7zlNEM2piAvP9DbexEwbYvYoJcZJiczkig2ysSwBKXzxg21Z+ndm9oTmyMG3vnhvcDvBEb9nJ0H0i2KrpG6pG/GPRODn/P5VAruZ785ytQFpW5GM2M3gFqzMDzJR7Jj17l5DORjX2Xgg6VoczGsvnk4krQDMxPsFfWLVZrRi8vIsB5G+lOxtTr4xDthOnbFWCJIgs4ccC/hLBxfDqvxhiDwE6E00hMa/YFC+mCq+jqxcD7n/xsVQIiTtZ0G4z8=) 2025-06-02 17:11:10.768446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBORGInkK7NMc1qTUW1Y4Wyeem7yrESG+WHVM+w/yG12UekU85jdwrSAqCsWP8f/WWlofVc0xflZo53hyRjn1hMg=) 2025-06-02 17:11:10.769227 | orchestrator | 2025-06-02 17:11:10.769572 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:10.770230 | orchestrator | Monday 02 June 2025 17:11:10 +0000 (0:00:01.091) 0:00:21.472 *********** 2025-06-02 17:11:11.881848 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTe6/N8dqKVqllbhG6+6FQEmLtNNBcIbUvnxYpWT1x/) 2025-06-02 17:11:11.883548 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCdsBZ9ZFI5bVCgMfu654Eik5d4WLowk4KRtLVUYRAvnaXuM2Pb9zm2Mlpf8XSrEpfxv3b/1geyIEI0Lz81Z4ixaXTJVx0KgXstOYHcFwbLQYxgqJWs4fpHLcL6/6dFtL0ZwFAS1BioifWBrTSjK8aQiIiSHbJU1l2EDhOaI9CX4d+6R8p6hh6YlZeBPa018CtNNCGrZf55FY44l3A7vXmbZFm6G/K8S18M/tfrO/Ta8gAVjbcp9Oa5v3105a9kXngKgbNgEeoGibie6JJK2BSgFDBFbZ0Vhb+n+R12SE9KKIqZVVTdvtiWk1rG0tEAMKEj6Qh1lqBFMTn17csAP9bfs4najus1QlZPav9ah6io+E06kbNF/3QC9V6O1eVzb0wbVNCqewggz9qMn2IaVte6Yo0iWOzNb6hwqWQb/uXUXotByYhCuxoygWN1pvPAZS0dT2oOIvZDgv5lTCy3lAAORfITiESG7TPwJx8hV1dyuT9sq+PbzcIJBHmt8J78OvM=) 2025-06-02 17:11:11.883787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7IukytFRw+8dLv9f7dHAHZWTDz3ckPkRluFpnAaiSvlzd0jynxuEHX3sG5Zs4sy5/KAbQCYVAQ9RPwzmVtSXU=) 2025-06-02 17:11:11.884768 | orchestrator | 2025-06-02 17:11:11.885383 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:11.886126 | orchestrator | Monday 02 June 2025 17:11:11 +0000 (0:00:01.114) 0:00:22.586 *********** 2025-06-02 17:11:12.983600 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEujn2nALpVcF7F2N7n9rTgdFxZIf1aq8PpufgsVGHlZ) 2025-06-02 17:11:12.985777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6oU1gt5sZmYG/XvQxW0lvJHrOi2ZVS6fTHmyryfJ0gEEUU0Nd3GHt645nhAtx8Mcpy4r8gfkg2asjxqRYMepL59od3VoHW5Mg+WSI7gMfat7PDIZ4ZfvHrjbk8flKz3tee1FeKq40GVTll1GoSSkKIx6RoPa/7r3a0BOu7uqLZG/XzN9fQltZ7sFku3m+3L/gkXz9Hc/fb8ngD6GgZTrJYu/qYn+fAJhQ/Qkc/Cab1VLDvF1WLet3ovxwCk/yEodRZdSBSBeCMiF0SdTWsvgSuwPX5mFK5uNRIIgvDAo/SxNduqwSaTAtLxkTL3pawHMZ+C4BOMDuC1yFbY6YEs+HxhSeQoBcKI+mq1s0kZFtwR/kimC2A2CUc1fI4CPgAEj9nM0L4UeGa6NiNg0Q0m0YkvPT5TVf3kJD6HvWFmjEI9nX5dIgzbTHz3I7heYFuikKpQMekWFCT7xPuZLexB+tiFjSJi3i2e8xUe0GrtqJYjUGUdC8ipcc2vJj5tVisgM=) 2025-06-02 17:11:12.986793 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBnq7rAfYzHzg2TlcfWtFmakUQOu3mGXWJh1UAZhNb75uCtp6Xjw/vBk2T4I3yv6BUy+nrH/oZe9uxDbGl5ZY2c=) 2025-06-02 17:11:12.987480 | orchestrator | 2025-06-02 17:11:12.988342 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:12.988820 | orchestrator | Monday 02 June 2025 17:11:12 +0000 (0:00:01.103) 0:00:23.689 *********** 2025-06-02 17:11:14.096645 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ7P7UOpca1UpJn+qgt/LWwQYFJP5eS2s83XYlFL5w9+J5OsZowAFtrvUtOTCUEWCiLXehCLkF7aq58dHTo3K+k=) 2025-06-02 17:11:14.097499 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx/qqluusW54gG0Og53zZOqLC4WfUfJ9eA1WPSxtJpoiPVRhcrwlThaI3POM+P10daxHc0+FVh/mgc3QYrdCGIjWdxLoiMp9dMU4haEsPlinCZobfKb4hhgWfgKb8Sau/RkV3h1OHJuzAMOKXy3+3zapeZctKzbWmj1BUnLlJNlDrKrpyDa6kx2vXaeEpEygElS98Pmmh+6jzCXjmc+fDkAb1/DNieBFLOBdwwrIwr121aWi14SLhGprCQ4+UEjshnpqHLEz8DP9Ows+UiitAgljM0hRs2v2VW3Tw9Ve178+f1LWQwhGnCgCSWDDpY28SM10B35uxv6ZzxYAE1oednzirhfKNc124H2t4geeOCPFXUi7XOk8xB6cPkok/mVdsGqWMIXhCxmNH/5XQZS88aOwVHIcxDC7oMbOHbZN34Z4CBZz8nAk8xNYlRFRhkQdXB/GDE/XwW35alz6j/LgUv3GWAEAedWG7ZpiYYtyPm69jixXTmDvuhKh1NSySPg1E=) 2025-06-02 17:11:14.098337 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILG/ddr7GgxxrubjVj9eMeoMHhrE0vDbz7Cwi0ReBNLI) 2025-06-02 17:11:14.099038 | orchestrator | 2025-06-02 17:11:14.099871 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:14.100588 | orchestrator | Monday 02 June 2025 17:11:14 +0000 (0:00:01.112) 0:00:24.802 *********** 2025-06-02 17:11:15.218454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3at3tYLAvgrsB8jqmSc2UveuALKYk0v7VsvUgodQtMnKzA4SCSRLjhll12CEpkX5wWxfayH5HqXQOniZYxPiU1yb8TWXQQ8aK1Ct4ymZ1ZPtNh3VMgo6j0D6wPCl1Yxe+3OqbVpxSfZLVS1V7T48OO7Dm+/THVgnn4cCFVZz+IpXU92q3nm91v1NCu0PN6g/+9vpbMER/08DCyWikEebTIo7prOVuy97s0dBIJyEzuKuAHPPlzHv3XTZrXGRoUC4S+J6SfZKkbmZj1k04OVl2rfzOL+mATv5We+O0GKEf0ZZibUUZWgYuUTw5VBeiZmzaZI93SOPIgY2K0SkvdyZe2DffExld2KPwJK40NNpxHrvqxgBUFTgv24SNQEVlbW2BcUmQPYIgtFxhnLTqaEtgWUbI50Fow7uUgvC0N+leU2xmfK3qktTKh/1Rrs/AlauW4hNlIhfIKzWO2E4SmuvUzwjh6mBTsqalFZgOq56mDr05AYVPOQtdbrClrBypCsU=) 2025-06-02 17:11:15.219127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHeTwv4Wn79IHYbfLLw34MuZ8KyKhGZpjylRqC1oK+SAhCwW0DJk0nOZMOFgTdOsyyJOd8xLlWzn7JUwdsJ6rzA=) 2025-06-02 17:11:15.220127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH+9BsaXfJs+CQiO7/LTnpSTchgq0uWXTSEU6kH+14aZ) 2025-06-02 17:11:15.220471 | orchestrator | 2025-06-02 17:11:15.221541 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:15.221846 | orchestrator | Monday 02 June 2025 17:11:15 +0000 (0:00:01.122) 0:00:25.924 *********** 2025-06-02 17:11:16.298201 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOLPhhp/vKfC4d/tapGF30qVaHJcwFTuOmdrU39JwE1nDd8mgZuQmm43/6fYht0apQLX948jNNSelOyw2lStuQM=) 2025-06-02 17:11:16.299000 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzGGdVSEu6dRAXcJgwPJV6Vr2Rlu+Sv0MlDInQTejYF2/6no5NQoBIh9YyL3IHjIQd99oI+yiktuyq2uMVwOVP3pvjWcXK6eosQ77j/UruUDgqRiPl4XjO7FzYdkJ2GGfoR3Y/fJrtIunZLO1r6atlomn7Txl24/S2/cldryJC2snZnfzwtfxGxzjgcHWEBSxf3tDBBoXztTEpUUJ+gChsixotCDloAHd9z5PfeXAQrGs0b2vS6c2T+M7sPmdxeOfqHE+vu4qypYKQKKisgpgSALyjhijV5bWOvnlEUYZiil0lwyDB5oSIc4m161Ry/hAyVuqwnyUEWq/tviQZH3gkjKUzkEGP9YSsdFd4R89yUYN0Tx5NsqyS5IRRO9L55kcXXc7Zp3E9f7L9yQQxk9i3gkxL008qsbqubp7BYYy8OKyKeUDok03f6yMOwKH58vGpeuIEcshw+KGS+GPUOmlMU7x8egJAH29Ols2h+sAWqo+3iQ2dlcmhGx2dHwItIXs=) 2025-06-02 17:11:16.299823 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFFl4gIcwlkq6e4DGjsKODZJlLRaIiH7mZN5fFL15bxS) 2025-06-02 17:11:16.300843 | orchestrator | 2025-06-02 17:11:16.301624 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:16.302057 | orchestrator | Monday 02 June 2025 17:11:16 +0000 (0:00:01.079) 0:00:27.004 *********** 2025-06-02 17:11:17.386855 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCue1FkwFGx34X3hwlOGHNeRBEwEENTVLVwaGFh5TgSK1QXoECJ5iU41eFovJ3eUfGroCTuRam6UUZplM4R/WqGOByE4Fvry/NuXgIlRBSohElMrRNm3mCmm/D+V2Df1bmTkYxJSYBBzj/DL1BWw/s83TRg2XMjxeiIfqoSJW8IvnAPo/7sz0Zf0w7h5R1dFLHWmDk1RjY8szjsRjcgmRv0diFTGQC3JcXBtRrv3sElp8ZL1+xH+5LhRdU2shpKT/TSs3Qk/69lYra8U1boXFVU3gShrwTjneQThQU7gk5eKwMk/ByqqUQNmJeWx+MZDmTVGpFHJUfSiP/w8TAumS71brVwdWQ2xCMNoIYrns2jTdub9QdHmz3KTWY2ZkDyQU2O9yN1jeiL88FnOHm2Wygx+k0BrZyaUZ/fEjJSszF+b1kpsbNVOY6Fh6pqr3+qRUmIoqSCIq+jRMNAnF7heSZzfm6luNvcF+osQ9kOu5Pk+uXi+mMvrcPSJJr5fyPhaGs=) 2025-06-02 17:11:17.387021 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGtBA6x53CZ2cvCN272UsvldJ5XXxe4g3lS63TkfFz1v31uJ7uC7bQ0/5hDLkxbh8xqzf0awVDb+f8caNiHiVks=) 2025-06-02 17:11:17.388022 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLh0cY4nIZIBdSFC8jIeTwQoi9UTGVGQpTfGZewxNkU) 2025-06-02 17:11:17.388416 | orchestrator | 2025-06-02 17:11:17.388676 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-02 17:11:17.389091 | orchestrator | Monday 02 June 2025 17:11:17 +0000 (0:00:01.087) 0:00:28.092 *********** 2025-06-02 17:11:17.550007 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-02 17:11:17.550467 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-02 17:11:17.551037 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-02 17:11:17.551607 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-02 17:11:17.552168 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 17:11:17.552879 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-02 17:11:17.553375 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-02 17:11:17.554787 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:11:17.555663 | orchestrator | 2025-06-02 17:11:17.556433 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-02 17:11:17.556824 | orchestrator | Monday 02 June 2025 17:11:17 +0000 (0:00:00.161) 0:00:28.253 *********** 2025-06-02 17:11:17.599419 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:11:17.599909 | orchestrator | 2025-06-02 17:11:17.600673 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-02 17:11:17.601498 | orchestrator | Monday 02 June 2025 17:11:17 +0000 (0:00:00.053) 0:00:28.307 *********** 2025-06-02 17:11:17.653101 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:11:17.653165 | orchestrator | 2025-06-02 17:11:17.653896 | orchestrator | TASK [osism.commons.known_hosts : Set file 
permissions] ************************ 2025-06-02 17:11:17.654523 | orchestrator | Monday 02 June 2025 17:11:17 +0000 (0:00:00.052) 0:00:28.360 *********** 2025-06-02 17:11:18.308993 | orchestrator | changed: [testbed-manager] 2025-06-02 17:11:18.309700 | orchestrator | 2025-06-02 17:11:18.310168 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:11:18.311076 | orchestrator | 2025-06-02 17:11:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:11:18.311108 | orchestrator | 2025-06-02 17:11:18 | INFO  | Please wait and do not abort execution. 2025-06-02 17:11:18.312090 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:11:18.312642 | orchestrator | 2025-06-02 17:11:18.313619 | orchestrator | 2025-06-02 17:11:18.314733 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:11:18.315050 | orchestrator | Monday 02 June 2025 17:11:18 +0000 (0:00:00.656) 0:00:29.016 *********** 2025-06-02 17:11:18.315453 | orchestrator | =============================================================================== 2025-06-02 17:11:18.316041 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.19s 2025-06-02 17:11:18.316688 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.46s 2025-06-02 17:11:18.317416 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-06-02 17:11:18.317740 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-06-02 17:11:18.318682 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-06-02 17:11:18.320377 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.15s 2025-06-02 17:11:18.321400 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-06-02 17:11:18.322225 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-06-02 17:11:18.323212 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-06-02 17:11:18.323957 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-06-02 17:11:18.325135 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-06-02 17:11:18.325578 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-06-02 17:11:18.326501 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-06-02 17:11:18.326944 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-02 17:11:18.327265 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-02 17:11:18.328011 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-02 17:11:18.328426 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.66s 2025-06-02 17:11:18.328801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-06-02 17:11:18.329525 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-06-02 17:11:18.329947 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-06-02 17:11:18.878101 | orchestrator | + osism apply squid 2025-06-02 17:11:20.668481 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:11:20.668593 | orchestrator | Registering 
Redlock._extend_script 2025-06-02 17:11:20.668608 | orchestrator | Registering Redlock._release_script 2025-06-02 17:11:20.731279 | orchestrator | 2025-06-02 17:11:20 | INFO  | Task 9d7898f7-45de-41b5-a6af-55de2a2cbc48 (squid) was prepared for execution. 2025-06-02 17:11:20.731404 | orchestrator | 2025-06-02 17:11:20 | INFO  | It takes a moment until task 9d7898f7-45de-41b5-a6af-55de2a2cbc48 (squid) has been started and output is visible here. 2025-06-02 17:11:24.751094 | orchestrator | 2025-06-02 17:11:24.751167 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-02 17:11:24.751423 | orchestrator | 2025-06-02 17:11:24.751903 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-02 17:11:24.752039 | orchestrator | Monday 02 June 2025 17:11:24 +0000 (0:00:00.177) 0:00:00.177 *********** 2025-06-02 17:11:24.850734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 17:11:24.850837 | orchestrator | 2025-06-02 17:11:24.851227 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-02 17:11:24.851787 | orchestrator | Monday 02 June 2025 17:11:24 +0000 (0:00:00.101) 0:00:00.279 *********** 2025-06-02 17:11:26.129784 | orchestrator | ok: [testbed-manager] 2025-06-02 17:11:26.130365 | orchestrator | 2025-06-02 17:11:26.130939 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-02 17:11:26.131656 | orchestrator | Monday 02 June 2025 17:11:26 +0000 (0:00:01.278) 0:00:01.557 *********** 2025-06-02 17:11:27.204379 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-02 17:11:27.204487 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-02 17:11:27.204862 | orchestrator | ok: 
[testbed-manager] => (item=/opt/squid) 2025-06-02 17:11:27.205671 | orchestrator | 2025-06-02 17:11:27.205957 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-02 17:11:27.206661 | orchestrator | Monday 02 June 2025 17:11:27 +0000 (0:00:01.071) 0:00:02.629 *********** 2025-06-02 17:11:28.209378 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-02 17:11:28.209471 | orchestrator | 2025-06-02 17:11:28.209486 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-02 17:11:28.209955 | orchestrator | Monday 02 June 2025 17:11:28 +0000 (0:00:01.006) 0:00:03.636 *********** 2025-06-02 17:11:28.582981 | orchestrator | ok: [testbed-manager] 2025-06-02 17:11:28.583148 | orchestrator | 2025-06-02 17:11:28.583168 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-02 17:11:28.583489 | orchestrator | Monday 02 June 2025 17:11:28 +0000 (0:00:00.374) 0:00:04.011 *********** 2025-06-02 17:11:29.539256 | orchestrator | changed: [testbed-manager] 2025-06-02 17:11:29.539622 | orchestrator | 2025-06-02 17:11:29.541682 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-02 17:11:29.542695 | orchestrator | Monday 02 June 2025 17:11:29 +0000 (0:00:00.954) 0:00:04.965 *********** 2025-06-02 17:12:02.000395 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-02 17:12:02.000520 | orchestrator | ok: [testbed-manager] 2025-06-02 17:12:02.000747 | orchestrator | 2025-06-02 17:12:02.002760 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-02 17:12:02.003944 | orchestrator | Monday 02 June 2025 17:12:01 +0000 (0:00:32.456) 0:00:37.422 *********** 2025-06-02 17:12:14.257340 | orchestrator | changed: [testbed-manager] 2025-06-02 17:12:14.257662 | orchestrator | 2025-06-02 17:12:14.297907 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-02 17:12:14.297996 | orchestrator | Monday 02 June 2025 17:12:14 +0000 (0:00:12.259) 0:00:49.681 *********** 2025-06-02 17:13:14.336690 | orchestrator | Pausing for 60 seconds 2025-06-02 17:13:14.336832 | orchestrator | changed: [testbed-manager] 2025-06-02 17:13:14.337106 | orchestrator | 2025-06-02 17:13:14.338968 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-02 17:13:14.339614 | orchestrator | Monday 02 June 2025 17:13:14 +0000 (0:01:00.080) 0:01:49.762 *********** 2025-06-02 17:13:14.391958 | orchestrator | ok: [testbed-manager] 2025-06-02 17:13:14.392150 | orchestrator | 2025-06-02 17:13:14.392911 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-02 17:13:14.393738 | orchestrator | Monday 02 June 2025 17:13:14 +0000 (0:00:00.058) 0:01:49.820 *********** 2025-06-02 17:13:14.948209 | orchestrator | changed: [testbed-manager] 2025-06-02 17:13:14.949184 | orchestrator | 2025-06-02 17:13:14.949299 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:13:14.949819 | orchestrator | 2025-06-02 17:13:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 17:13:14.950348 | orchestrator | 2025-06-02 17:13:14 | INFO  | Please wait and do not abort execution. 2025-06-02 17:13:14.950771 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:13:14.951208 | orchestrator | 2025-06-02 17:13:14.952109 | orchestrator | 2025-06-02 17:13:14.952881 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:13:14.954085 | orchestrator | Monday 02 June 2025 17:13:14 +0000 (0:00:00.556) 0:01:50.377 *********** 2025-06-02 17:13:14.954245 | orchestrator | =============================================================================== 2025-06-02 17:13:14.954933 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-06-02 17:13:14.956184 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.46s 2025-06-02 17:13:14.956452 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.26s 2025-06-02 17:13:14.957393 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.28s 2025-06-02 17:13:14.958084 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.07s 2025-06-02 17:13:14.958246 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.01s 2025-06-02 17:13:14.958841 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-06-02 17:13:14.959921 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.56s 2025-06-02 17:13:14.961137 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-06-02 17:13:14.961571 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-06-02 17:13:14.962374 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-06-02 17:13:15.312758 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-02 17:13:15.312858 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-06-02 17:13:15.315912 | orchestrator | ++ semver 9.1.0 9.0.0 2025-06-02 17:13:15.379570 | orchestrator | + [[ 1 -lt 0 ]] 2025-06-02 17:13:15.380715 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-02 17:13:16.933845 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:13:16.933947 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:13:16.933962 | orchestrator | Registering Redlock._release_script 2025-06-02 17:13:16.984069 | orchestrator | 2025-06-02 17:13:16 | INFO  | Task c10403a8-8f07-4b8e-92ad-13c4e7c6827f (operator) was prepared for execution. 2025-06-02 17:13:16.984178 | orchestrator | 2025-06-02 17:13:16 | INFO  | It takes a moment until task c10403a8-8f07-4b8e-92ad-13c4e7c6827f (operator) has been started and output is visible here. 
2025-06-02 17:13:20.909283 | orchestrator |
2025-06-02 17:13:20.909976 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-02 17:13:20.910937 | orchestrator |
2025-06-02 17:13:20.912074 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:13:20.912688 | orchestrator | Monday 02 June 2025 17:13:20 +0000 (0:00:00.158) 0:00:00.158 ***********
2025-06-02 17:13:24.318215 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:13:24.319587 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:13:24.321693 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:24.322709 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:24.323722 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:24.326140 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:13:24.326479 | orchestrator |
2025-06-02 17:13:24.327277 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-02 17:13:24.331276 | orchestrator | Monday 02 June 2025 17:13:24 +0000 (0:00:03.410) 0:00:03.569 ***********
2025-06-02 17:13:25.164673 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:25.164783 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:25.164805 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:13:25.166702 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:25.166875 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:13:25.170306 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:13:25.171633 | orchestrator |
2025-06-02 17:13:25.173098 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-02 17:13:25.174109 | orchestrator |
2025-06-02 17:13:25.175251 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-02 17:13:25.176389 | orchestrator | Monday 02 June 2025 17:13:25 +0000 (0:00:00.844) 0:00:04.413 ***********
2025-06-02 17:13:25.278782 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:13:25.307253 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:13:25.337874 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:13:25.385167 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:25.385921 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:25.386381 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:25.387248 | orchestrator |
2025-06-02 17:13:25.390802 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-02 17:13:25.390833 | orchestrator | Monday 02 June 2025 17:13:25 +0000 (0:00:00.224) 0:00:04.638 ***********
2025-06-02 17:13:25.452586 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:13:25.476078 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:13:25.502148 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:13:25.551122 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:25.552235 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:25.552540 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:25.553141 | orchestrator |
2025-06-02 17:13:25.553684 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-02 17:13:25.554611 | orchestrator | Monday 02 June 2025 17:13:25 +0000 (0:00:00.166) 0:00:04.805 ***********
2025-06-02 17:13:26.151532 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:26.152515 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:26.153626 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:26.154638 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:26.155238 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:26.155977 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:26.156635 | orchestrator |
2025-06-02 17:13:26.157829 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-02 17:13:26.158757 | orchestrator | Monday 02 June 2025 17:13:26 +0000 (0:00:00.598) 0:00:05.403 ***********
2025-06-02 17:13:26.981411 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:26.981522 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:26.981611 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:26.981977 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:26.982619 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:26.983137 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:26.985714 | orchestrator |
2025-06-02 17:13:26.986806 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 17:13:26.987508 | orchestrator | Monday 02 June 2025 17:13:26 +0000 (0:00:00.829) 0:00:06.233 ***********
2025-06-02 17:13:28.197223 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-02 17:13:28.197399 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-02 17:13:28.197589 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-02 17:13:28.198229 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-02 17:13:28.198763 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-02 17:13:28.199059 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-02 17:13:28.200391 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-02 17:13:28.200417 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-02 17:13:28.201196 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-02 17:13:28.201646 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-02 17:13:28.202780 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-02 17:13:28.203701 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-02 17:13:28.205608 | orchestrator |
2025-06-02 17:13:28.206275 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 17:13:28.206842 | orchestrator | Monday 02 June 2025 17:13:28 +0000 (0:00:01.214) 0:00:07.447 ***********
2025-06-02 17:13:29.554400 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:29.556183 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:29.557488 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:29.560479 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:29.564719 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:29.565409 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:29.566218 | orchestrator |
2025-06-02 17:13:29.569305 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-02 17:13:29.570858 | orchestrator | Monday 02 June 2025 17:13:29 +0000 (0:00:01.357) 0:00:08.805 ***********
2025-06-02 17:13:30.770370 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-02 17:13:30.771078 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-02 17:13:30.771875 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-02 17:13:30.844952 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:13:30.846394 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:13:30.847666 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:13:30.848353 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:13:30.849517 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:13:30.850566 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:13:30.851522 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-02 17:13:30.852730 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-02 17:13:30.853789 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-02 17:13:30.854797 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-02 17:13:30.857850 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-02 17:13:30.857873 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-02 17:13:30.857885 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:13:30.857896 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:13:30.858623 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:13:30.860208 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:13:30.860842 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:13:30.861886 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:13:30.862582 | orchestrator |
2025-06-02 17:13:30.863641 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-02 17:13:30.863961 | orchestrator | Monday 02 June 2025 17:13:30 +0000 (0:00:01.293) 0:00:10.098 ***********
2025-06-02 17:13:31.431739 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:31.432030 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:31.432985 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:31.438160 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:31.438194 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:31.438206 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:31.438217 | orchestrator |
2025-06-02 17:13:31.438230 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-02 17:13:31.441043 | orchestrator | Monday 02 June 2025 17:13:31 +0000 (0:00:00.586) 0:00:10.684 ***********
2025-06-02 17:13:31.511405 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:13:31.535195 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:13:31.558010 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:13:31.612978 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:13:31.613153 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:13:31.613513 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:13:31.613640 | orchestrator |
2025-06-02 17:13:31.613958 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 17:13:31.614191 | orchestrator | Monday 02 June 2025 17:13:31 +0000 (0:00:00.181) 0:00:10.865 ***********
2025-06-02 17:13:32.340750 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 17:13:32.341650 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:32.342830 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 17:13:32.343872 | orchestrator | changed: [testbed-node-5] => (item=None)
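The "Make ssh pipelining working" play above exists because Ansible's ssh pipelining breaks when sudo enforces `requiretty`. A hedged shell sketch of what the "Do not require tty for all users" task boils down to (the function name and the file-path parameter are illustrative; the real task's mechanism is not shown in the log):

```shell
# Comment out any active requiretty directive in a sudoers-style file so
# that sudo stops demanding a tty, which is what lets Ansible pipeline
# module execution over a single ssh connection.
disable_requiretty() {
  sed -i 's/^\([^#].*requiretty.*\)/#\1/' "$1"
}

# Demonstrate against a scratch copy rather than the real /etc/sudoers:
f=$(mktemp)
printf 'Defaults    requiretty\nDefaults    env_reset\n' > "$f"
disable_requiretty "$f"
grep requiretty "$f"   # the directive is now commented out
```

On a real host this would target `/etc/sudoers` (edited via `visudo` for safety); the temp file keeps the sketch runnable without root.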
2025-06-02 17:13:32.344900 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:32.345929 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:32.346723 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 17:13:32.347591 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:32.348405 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 17:13:32.349009 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:32.349796 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 17:13:32.350669 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:32.351416 | orchestrator |
2025-06-02 17:13:32.352132 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 17:13:32.352833 | orchestrator | Monday 02 June 2025 17:13:32 +0000 (0:00:00.728) 0:00:11.593 ***********
2025-06-02 17:13:32.408145 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:13:32.433519 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:13:32.456677 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:13:32.535571 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:13:32.536840 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:13:32.539397 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:13:32.540062 | orchestrator |
2025-06-02 17:13:32.541419 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 17:13:32.542658 | orchestrator | Monday 02 June 2025 17:13:32 +0000 (0:00:00.193) 0:00:11.787 ***********
2025-06-02 17:13:32.623083 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:13:32.648051 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:13:32.671587 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:13:32.707597 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:13:32.708714 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:13:32.709462 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:13:32.710297 | orchestrator |
2025-06-02 17:13:32.712789 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 17:13:32.712817 | orchestrator | Monday 02 June 2025 17:13:32 +0000 (0:00:00.172) 0:00:11.960 ***********
2025-06-02 17:13:32.759713 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:13:32.794596 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:13:32.822472 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:13:32.887133 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:13:32.887743 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:13:32.888156 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:13:32.888774 | orchestrator |
2025-06-02 17:13:32.890647 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 17:13:32.891060 | orchestrator | Monday 02 June 2025 17:13:32 +0000 (0:00:00.179) 0:00:12.140 ***********
2025-06-02 17:13:33.557520 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:33.558540 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:33.560082 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:33.561118 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:33.561954 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:33.563059 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:33.564079 | orchestrator |
2025-06-02 17:13:33.564941 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 17:13:33.565966 | orchestrator | Monday 02 June 2025 17:13:33 +0000 (0:00:00.669) 0:00:12.810 ***********
2025-06-02 17:13:33.651003 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:13:33.678465 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:13:33.710217 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:13:33.825290 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:13:33.826730 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:13:33.828186 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:13:33.830656 | orchestrator |
2025-06-02 17:13:33.831406 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:13:33.831996 | orchestrator | 2025-06-02 17:13:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:13:33.832241 | orchestrator | 2025-06-02 17:13:33 | INFO  | Please wait and do not abort execution.
2025-06-02 17:13:33.833377 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:13:33.834143 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:13:33.834617 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:13:33.835204 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:13:33.835709 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:13:33.836484 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:13:33.836958 | orchestrator |
2025-06-02 17:13:33.837694 | orchestrator |
2025-06-02 17:13:33.838109 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:13:33.838706 | orchestrator | Monday 02 June 2025 17:13:33 +0000 (0:00:00.267) 0:00:13.077 ***********
2025-06-02 17:13:33.839152 | orchestrator | ===============================================================================
2025-06-02 17:13:33.839629 | orchestrator | Gathering Facts --------------------------------------------------------- 3.41s
2025-06-02 17:13:33.840038 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s
2025-06-02 17:13:33.840672 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2025-06-02 17:13:33.841145 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-06-02 17:13:33.841657 | orchestrator | Do not require tty for all users ---------------------------------------- 0.84s
2025-06-02 17:13:33.843076 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2025-06-02 17:13:33.843376 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2025-06-02 17:13:33.845021 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2025-06-02 17:13:33.845045 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-06-02 17:13:33.848766 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-06-02 17:13:33.848801 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s
2025-06-02 17:13:33.850657 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.22s
2025-06-02 17:13:33.850746 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2025-06-02 17:13:33.850761 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2025-06-02 17:13:33.850772 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-06-02 17:13:33.850783 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2025-06-02 17:13:33.850794 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-06-02 17:13:34.394228 | orchestrator | + osism apply --environment custom facts
2025-06-02 17:13:36.110985 | orchestrator | 2025-06-02 17:13:36 | INFO  | Trying to run play facts in environment custom
2025-06-02 17:13:36.115397 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:13:36.115452 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:13:36.115465 | orchestrator | Registering Redlock._release_script
2025-06-02 17:13:36.183416 | orchestrator | 2025-06-02 17:13:36 | INFO  | Task 26b0a9fb-9f38-4d84-9890-014c659dcb66 (facts) was prepared for execution.
2025-06-02 17:13:36.183504 | orchestrator | 2025-06-02 17:13:36 | INFO  | It takes a moment until task 26b0a9fb-9f38-4d84-9890-014c659dcb66 (facts) has been started and output is visible here.
2025-06-02 17:13:40.139704 | orchestrator |
2025-06-02 17:13:40.139816 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-02 17:13:40.140796 | orchestrator |
2025-06-02 17:13:40.141878 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 17:13:40.144566 | orchestrator | Monday 02 June 2025 17:13:40 +0000 (0:00:00.090) 0:00:00.090 ***********
2025-06-02 17:13:41.725849 | orchestrator | ok: [testbed-manager]
2025-06-02 17:13:41.726076 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:41.726295 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:41.727634 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:41.730366 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:41.730530 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:41.731932 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:41.732431 | orchestrator |
2025-06-02 17:13:41.733005 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-02 17:13:41.733133 | orchestrator | Monday 02 June 2025 17:13:41 +0000 (0:00:01.585) 0:00:01.676 ***********
2025-06-02 17:13:42.983584 | orchestrator | ok: [testbed-manager]
2025-06-02 17:13:42.983693 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:13:42.983709 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:42.983720 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:42.984513 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:13:42.989553 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:42.990152 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:13:42.990762 | orchestrator |
2025-06-02 17:13:42.991082 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-02 17:13:42.991774 | orchestrator |
2025-06-02 17:13:42.992901 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 17:13:42.996359 | orchestrator | Monday 02 June 2025 17:13:42 +0000 (0:00:01.259) 0:00:02.935 ***********
2025-06-02 17:13:43.097844 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:43.098274 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:43.099287 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:43.101115 | orchestrator |
2025-06-02 17:13:43.102112 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 17:13:43.107136 | orchestrator | Monday 02 June 2025 17:13:43 +0000 (0:00:00.117) 0:00:03.053 ***********
2025-06-02 17:13:43.361402 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:43.364348 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:43.365306 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:43.366468 | orchestrator |
2025-06-02 17:13:43.367508 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 17:13:43.368387 | orchestrator | Monday 02 June 2025 17:13:43 +0000 (0:00:00.260) 0:00:03.313 ***********
2025-06-02 17:13:43.559571 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:43.562470 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:43.564078 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:43.566309 | orchestrator |
2025-06-02 17:13:43.566985 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 17:13:43.568795 | orchestrator | Monday 02 June 2025 17:13:43 +0000 (0:00:00.200) 0:00:03.513 ***********
2025-06-02 17:13:43.719384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:13:43.720360 | orchestrator |
2025-06-02 17:13:43.720928 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 17:13:43.722363 | orchestrator | Monday 02 June 2025 17:13:43 +0000 (0:00:00.161) 0:00:03.674 ***********
2025-06-02 17:13:44.152693 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:44.156182 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:44.156218 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:44.156231 | orchestrator |
2025-06-02 17:13:44.157075 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 17:13:44.157939 | orchestrator | Monday 02 June 2025 17:13:44 +0000 (0:00:00.430) 0:00:04.104 ***********
2025-06-02 17:13:44.275415 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:13:44.276946 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:13:44.278999 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:13:44.280525 | orchestrator |
2025-06-02 17:13:44.281826 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 17:13:44.282740 | orchestrator | Monday 02 June 2025 17:13:44 +0000 (0:00:00.122) 0:00:04.227 ***********
2025-06-02 17:13:45.350618 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:45.350728 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:45.350744 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:45.350757 | orchestrator |
2025-06-02 17:13:45.350788 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 17:13:45.351350 | orchestrator | Monday 02 June 2025 17:13:45 +0000 (0:00:01.071) 0:00:05.299 ***********
2025-06-02 17:13:45.812019 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:13:45.814208 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:13:45.815739 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:13:45.816393 | orchestrator |
2025-06-02 17:13:45.817986 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 17:13:45.818695 | orchestrator | Monday 02 June 2025 17:13:45 +0000 (0:00:00.466) 0:00:05.765 ***********
2025-06-02 17:13:46.884287 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:13:46.884956 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:13:46.885800 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:13:46.886667 | orchestrator |
2025-06-02 17:13:46.887469 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 17:13:46.888201 | orchestrator | Monday 02 June 2025 17:13:46 +0000 (0:00:01.073) 0:00:06.839 ***********
2025-06-02 17:14:00.371640 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:00.371764 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:00.371781 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:00.371795 | orchestrator |
2025-06-02 17:14:00.371808 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-02 17:14:00.371821 | orchestrator | Monday 02 June 2025 17:14:00 +0000 (0:00:13.483) 0:00:20.322 ***********
2025-06-02 17:14:00.502825 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:00.503434 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:00.503775 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:00.504536 | orchestrator |
2025-06-02 17:14:00.505630 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-02 17:14:00.506982 | orchestrator | Monday 02 June 2025 17:14:00 +0000 (0:00:00.133) 0:00:20.456 ***********
2025-06-02 17:14:07.430856 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:07.431265 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:07.434316 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:07.434446 | orchestrator |
2025-06-02 17:14:07.435150 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 17:14:07.436061 | orchestrator | Monday 02 June 2025 17:14:07 +0000 (0:00:06.927) 0:00:27.384 ***********
2025-06-02 17:14:08.870850 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:08.870958 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:08.871736 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:08.871760 | orchestrator |
2025-06-02 17:14:08.873515 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 17:14:08.873544 | orchestrator | Monday 02 June 2025 17:14:08 +0000 (0:00:01.440) 0:00:28.824 ***********
2025-06-02 17:14:12.445229 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-02 17:14:12.449502 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-02 17:14:12.451184 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-02 17:14:12.452710 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-02 17:14:12.454093 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-02 17:14:12.454990 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-02 17:14:12.455721 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-02 17:14:12.456472 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-02 17:14:12.456988 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-02 17:14:12.457836 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-02 17:14:12.458288 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-02 17:14:12.459307 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-02 17:14:12.459925 | orchestrator |
2025-06-02 17:14:12.460372 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 17:14:12.461180 | orchestrator | Monday 02 June 2025 17:14:12 +0000 (0:00:03.572) 0:00:32.397 ***********
2025-06-02 17:14:13.629300 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:13.630753 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:13.631881 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:13.635752 | orchestrator |
2025-06-02 17:14:13.635777 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:14:13.635792 | orchestrator |
2025-06-02 17:14:13.636165 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:14:13.636870 | orchestrator | Monday 02 June 2025 17:14:13 +0000 (0:00:01.183) 0:00:33.580 ***********
2025-06-02 17:14:17.513103 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:17.513500 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:17.514231 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:17.515347 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:17.516544 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:17.517777 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:17.518716 | orchestrator | ok: [testbed-node-4]
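The fact files copied above (`testbed_ceph_devices`, `testbed_ceph_osd_devices`, …) are Ansible local facts: files placed in `/etc/ansible/facts.d` whose contents surface as `ansible_local.<name>` on the next fact gathering, and an executable `<name>.fact` file has its JSON stdout parsed. A minimal sketch, with the directory parameterized so it runs unprivileged and an invented fact body (only the file names come from the log):

```shell
# Install a minimal executable local fact. Ansible runs every executable
# *.fact file in the facts.d directory and parses its stdout as JSON.
install_fact() {
  dir=$1; name=$2
  mkdir -p "$dir"
  printf '#!/bin/sh\necho '\''{"devices": ["/dev/sdb", "/dev/sdc"]}'\''\n' \
    > "$dir/$name.fact"
  chmod +x "$dir/$name.fact"
}

# On a real node the directory would be /etc/ansible/facts.d; a scratch
# directory keeps the sketch runnable without root.
d=$(mktemp -d)
install_fact "$d" testbed_ceph_devices
"$d/testbed_ceph_devices.fact"
```

Non-executable `.fact` files containing static JSON or INI work the same way, which is the simpler option when the fact does not need to be computed at gather time.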
2025-06-02 17:14:17.519511 | orchestrator |
2025-06-02 17:14:17.520649 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:14:17.521390 | orchestrator | 2025-06-02 17:14:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:14:17.521469 | orchestrator | 2025-06-02 17:14:17 | INFO  | Please wait and do not abort execution.
2025-06-02 17:14:17.522907 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:17.524150 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:17.524988 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:17.525795 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:17.526664 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:14:17.527803 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:14:17.528211 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:14:17.529053 | orchestrator |
2025-06-02 17:14:17.530121 | orchestrator |
2025-06-02 17:14:17.531344 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:14:17.532612 | orchestrator | Monday 02 June 2025 17:14:17 +0000 (0:00:03.886) 0:00:37.467 ***********
2025-06-02 17:14:17.533791 | orchestrator | ===============================================================================
2025-06-02 17:14:17.534786 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.48s
2025-06-02 17:14:17.535200 | orchestrator | Install required packages (Debian) -------------------------------------- 6.93s
2025-06-02 17:14:17.536368 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.89s
2025-06-02 17:14:17.536790 | orchestrator | Copy fact files --------------------------------------------------------- 3.57s
2025-06-02 17:14:17.537902 | orchestrator | Create custom facts directory ------------------------------------------- 1.59s
2025-06-02 17:14:17.538845 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2025-06-02 17:14:17.539793 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s
2025-06-02 17:14:17.541100 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.18s
2025-06-02 17:14:17.542594 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2025-06-02 17:14:17.543466 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2025-06-02 17:14:17.543864 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-06-02 17:14:17.544618 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-06-02 17:14:17.545438 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.26s
2025-06-02 17:14:17.545775 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-06-02 17:14:17.546654 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-06-02 17:14:17.547381 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-06-02 17:14:17.547887 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-06-02 17:14:17.548695 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-06-02 17:14:18.123643 | orchestrator | + osism apply bootstrap
2025-06-02 17:14:19.852912 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:14:19.853024 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:14:19.853039 | orchestrator | Registering Redlock._release_script
2025-06-02 17:14:19.911774 | orchestrator | 2025-06-02 17:14:19 | INFO  | Task 332dbbdd-66a1-445a-9313-b9cdc4321a51 (bootstrap) was prepared for execution.
2025-06-02 17:14:19.911868 | orchestrator | 2025-06-02 17:14:19 | INFO  | It takes a moment until task 332dbbdd-66a1-445a-9313-b9cdc4321a51 (bootstrap) has been started and output is visible here.
2025-06-02 17:14:24.171923 | orchestrator |
2025-06-02 17:14:24.174284 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-02 17:14:24.176550 | orchestrator |
2025-06-02 17:14:24.179771 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-02 17:14:24.180824 | orchestrator | Monday 02 June 2025 17:14:24 +0000 (0:00:00.168) 0:00:00.168 ***********
2025-06-02 17:14:24.253811 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:24.283466 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:24.311777 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:24.339278 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:24.432470 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:24.432664 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:24.433612 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:24.434640 | orchestrator |
2025-06-02 17:14:24.435103 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:14:24.435768 | orchestrator |
2025-06-02 17:14:24.436552 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:14:24.437068 | orchestrator | Monday 02 June 2025 17:14:24 +0000 (0:00:00.264) 0:00:00.433 ***********
2025-06-02 17:14:28.035645 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:28.037862 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:28.040695 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:28.040791 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:28.041672 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:28.042341 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:28.043244 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:28.044075 | orchestrator |
2025-06-02 17:14:28.044719 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-02 17:14:28.045520 | orchestrator |
2025-06-02 17:14:28.046140 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:14:28.046920 | orchestrator | Monday 02 June 2025 17:14:28 +0000 (0:00:03.600) 0:00:04.033 ***********
2025-06-02 17:14:28.134978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-02 17:14:28.135201 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 17:14:28.183085 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 17:14:28.183271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:14:28.185262 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 17:14:28.186145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:14:28.186239 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-02 17:14:28.236739 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 17:14:28.242174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:14:28.242602 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-02 17:14:28.243084 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 17:14:28.243480 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 17:14:28.244063 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-06-02 17:14:28.244753 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-02 17:14:28.246126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 17:14:28.284119 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 17:14:28.284276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 17:14:28.284393 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-06-02 17:14:28.285758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-02 17:14:28.286225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-02 17:14:28.287905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 17:14:28.564599 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:28.565287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 17:14:28.565894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 17:14:28.566290 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:28.567031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-02 17:14:28.567093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 17:14:28.568133 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-06-02 17:14:28.568232 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-02 17:14:28.568912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 17:14:28.569342 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:28.570209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 17:14:28.570332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-02 17:14:28.571171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 17:14:28.572188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 17:14:28.572274 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-02 17:14:28.573483 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 17:14:28.573970 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:14:28.574104 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-02 17:14:28.575092 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-02 17:14:28.575551 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 17:14:28.576384 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:28.576628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:14:28.577464 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 17:14:28.578243 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-02 17:14:28.578753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:14:28.579528 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 17:14:28.579667 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:28.580351 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-02 17:14:28.581121 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 17:14:28.581506 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:28.581993 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-02 17:14:28.582416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 17:14:28.583176 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 17:14:28.583502 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 17:14:28.584044 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:28.584720 | orchestrator |
2025-06-02 17:14:28.585162 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-06-02 17:14:28.585867 | orchestrator |
2025-06-02 17:14:28.586714 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-06-02 17:14:28.587241 | orchestrator | Monday 02 June 2025 17:14:28 +0000 (0:00:00.530) 0:00:04.564 ***********
2025-06-02 17:14:29.837517 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:29.837711 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:29.838228 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:29.838703 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:29.839472 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:29.840130 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:29.842628 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:29.843460 | orchestrator |
2025-06-02 17:14:29.844794 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-06-02 17:14:29.846114 | orchestrator | Monday 02 June 2025 17:14:29 +0000 (0:00:01.272) 0:00:05.836 ***********
2025-06-02 17:14:31.133638 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:31.136433 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:31.137565 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:31.138885 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:31.140206 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:31.140898 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:31.141726 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:31.142564 | orchestrator |
2025-06-02 17:14:31.143218 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-06-02 17:14:31.143682 | orchestrator | Monday 02 June 2025 17:14:31 +0000 (0:00:01.292) 0:00:07.129 ***********
2025-06-02 17:14:31.426450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:14:31.426559 | orchestrator |
2025-06-02 17:14:31.426577 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-06-02 17:14:31.426650 | orchestrator | Monday 02 June 2025 17:14:31 +0000 (0:00:00.294) 0:00:07.424 ***********
2025-06-02 17:14:33.486615 | orchestrator | changed: [testbed-manager]
2025-06-02 17:14:33.486802 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:33.490180 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:33.492084 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:33.492667 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:33.493278 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:33.493903 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:33.494423 | orchestrator |
2025-06-02 17:14:33.495120 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-06-02 17:14:33.495668 | orchestrator | Monday 02 June 2025 17:14:33 +0000 (0:00:02.060) 0:00:09.484 ***********
2025-06-02 17:14:33.560113 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:33.752778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:14:33.752878 | orchestrator |
2025-06-02 17:14:33.753544 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-06-02 17:14:33.757063 | orchestrator | Monday 02 June 2025 17:14:33 +0000 (0:00:00.267) 0:00:09.751 ***********
2025-06-02 17:14:34.775248 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:34.776534 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:34.777078 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:34.778363 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:34.779123 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:34.780109 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:34.780967 | orchestrator |
2025-06-02 17:14:34.781745 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-06-02 17:14:34.782101 | orchestrator | Monday 02 June 2025 17:14:34 +0000 (0:00:01.019) 0:00:10.771 ***********
2025-06-02 17:14:34.827414 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:35.310012 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:35.311205 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:35.311272 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:35.312516 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:35.313284 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:35.314114 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:35.314841 | orchestrator |
2025-06-02 17:14:35.315520 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-06-02 17:14:35.316431 | orchestrator | Monday 02 June 2025 17:14:35 +0000 (0:00:00.536) 0:00:11.308 ***********
2025-06-02 17:14:35.414180 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:35.447962 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:35.471206 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:35.755147 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:35.755775 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:35.758697 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:35.760361 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:35.761730 | orchestrator |
2025-06-02 17:14:35.763119 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-02 17:14:35.764029 | orchestrator | Monday 02 June 2025 17:14:35 +0000 (0:00:00.442) 0:00:11.751 ***********
2025-06-02 17:14:35.831489 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:35.849923 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:35.878457 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:35.910814 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:36.001587 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:36.002822 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:36.007429 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:36.007461 | orchestrator |
2025-06-02 17:14:36.007647 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-02 17:14:36.009219 | orchestrator | Monday 02 June 2025 17:14:35 +0000 (0:00:00.249) 0:00:12.001 ***********
2025-06-02 17:14:36.300917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:14:36.301487 | orchestrator |
2025-06-02 17:14:36.302711 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-02 17:14:36.303317 | orchestrator | Monday 02 June 2025 17:14:36 +0000 (0:00:00.298) 0:00:12.299 ***********
2025-06-02 17:14:36.620628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:14:36.620992 | orchestrator |
2025-06-02 17:14:36.622754 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-02 17:14:36.623113 | orchestrator | Monday 02 June 2025 17:14:36 +0000 (0:00:00.317) 0:00:12.617 ***********
2025-06-02 17:14:37.923752 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:37.924507 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:37.925767 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:37.927413 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:37.927992 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:37.928878 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:37.930109 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:37.931893 | orchestrator |
2025-06-02 17:14:37.935530 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-02 17:14:37.936224 | orchestrator | Monday 02 June 2025 17:14:37 +0000 (0:00:01.304) 0:00:13.921 ***********
2025-06-02 17:14:38.020326 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:38.050598 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:38.078518 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:38.107342 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:38.163387 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:38.164527 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:38.165544 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:38.167072 | orchestrator |
2025-06-02 17:14:38.170689 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-02 17:14:38.170713 | orchestrator | Monday 02 June 2025 17:14:38 +0000 (0:00:00.241) 0:00:14.163 ***********
2025-06-02 17:14:38.698700 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:38.701081 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:38.701136 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:38.702351 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:38.703938 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:38.705400 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:38.706055 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:38.706834 | orchestrator |
2025-06-02 17:14:38.707923 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-02 17:14:38.708699 | orchestrator | Monday 02 June 2025 17:14:38 +0000 (0:00:00.532) 0:00:14.695 ***********
2025-06-02 17:14:38.807273 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:38.832903 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:38.865489 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:38.952676 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:38.954510 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:38.955202 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:38.956728 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:38.957351 | orchestrator |
2025-06-02 17:14:38.958411 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-02 17:14:38.959271 | orchestrator | Monday 02 June 2025 17:14:38 +0000 (0:00:00.256) 0:00:14.952 ***********
2025-06-02 17:14:39.531936 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:39.533229 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:39.535043 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:39.535956 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:39.538207 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:39.538645 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:39.539332 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:39.540148 | orchestrator |
2025-06-02 17:14:39.540899 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-02 17:14:39.541529 | orchestrator | Monday 02 June 2025 17:14:39 +0000 (0:00:00.579) 0:00:15.531 ***********
2025-06-02 17:14:40.655198 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:40.655445 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:40.655470 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:40.655680 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:40.661387 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:40.662189 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:40.665176 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:40.666256 | orchestrator |
2025-06-02 17:14:40.666367 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-02 17:14:40.667449 | orchestrator | Monday 02 June 2025 17:14:40 +0000 (0:00:01.119) 0:00:16.650 ***********
2025-06-02 17:14:42.781551 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:42.783624 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:42.784163 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:42.785285 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:42.787192 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:42.788167 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:42.788927 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:42.789737 | orchestrator |
2025-06-02 17:14:42.790472 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-02 17:14:42.791556 | orchestrator | Monday 02 June 2025 17:14:42 +0000 (0:00:02.127) 0:00:18.778 ***********
2025-06-02 17:14:43.210374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:14:43.210868 | orchestrator |
2025-06-02 17:14:43.211831 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-02 17:14:43.212623 | orchestrator | Monday 02 June 2025 17:14:43 +0000 (0:00:00.430) 0:00:19.209 ***********
2025-06-02 17:14:43.293009 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:44.524533 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:44.525577 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:44.526724 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:44.528547 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:44.530169 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:44.531243 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:44.532576 | orchestrator |
2025-06-02 17:14:44.533437 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 17:14:44.534305 | orchestrator | Monday 02 June 2025 17:14:44 +0000 (0:00:01.311) 0:00:20.520 ***********
2025-06-02 17:14:44.600066 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:44.628046 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:44.654633 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:44.681504 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:44.758397 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:44.758609 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:44.759633 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:44.760942 | orchestrator |
2025-06-02 17:14:44.760979 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 17:14:44.761330 | orchestrator | Monday 02 June 2025 17:14:44 +0000 (0:00:00.236) 0:00:20.757 ***********
2025-06-02 17:14:44.834793 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:44.893032 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:44.918664 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:44.994883 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:44.995682 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:44.997057 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:44.997881 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:44.998402 | orchestrator |
2025-06-02 17:14:44.999008 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 17:14:44.999668 | orchestrator | Monday 02 June 2025 17:14:44 +0000 (0:00:00.236) 0:00:20.994 ***********
2025-06-02 17:14:45.073598 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:45.111004 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:45.137604 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:45.163693 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:45.232938 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:45.233385 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:45.234268 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:45.235105 | orchestrator |
2025-06-02 17:14:45.236222 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 17:14:45.236403 | orchestrator | Monday 02 June 2025 17:14:45 +0000 (0:00:00.237) 0:00:21.232 ***********
2025-06-02 17:14:45.534497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:14:45.534600 | orchestrator |
2025-06-02 17:14:45.534991 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 17:14:45.538009 | orchestrator | Monday 02 June 2025 17:14:45 +0000 (0:00:00.300) 0:00:21.532 ***********
2025-06-02 17:14:46.083723 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:46.084638 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:46.084669 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:46.084925 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:46.087854 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:46.088090 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:46.089657 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:46.090841 | orchestrator |
2025-06-02 17:14:46.092049 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 17:14:46.093927 | orchestrator | Monday 02 June 2025 17:14:46 +0000 (0:00:00.547) 0:00:22.080 ***********
2025-06-02 17:14:46.168153 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:14:46.201402 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:46.222939 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:46.251068 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:46.318639 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:46.318804 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:46.318991 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:46.319765 | orchestrator |
2025-06-02 17:14:46.320118 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 17:14:46.320463 | orchestrator | Monday 02 June 2025 17:14:46 +0000 (0:00:00.238) 0:00:22.318 ***********
2025-06-02 17:14:47.414519 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:47.415472 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:47.417597 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:47.418544 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:47.420561 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:47.421989 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:47.423209 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:47.424203 | orchestrator |
2025-06-02 17:14:47.425271 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 17:14:47.426199 | orchestrator | Monday 02 June 2025 17:14:47 +0000 (0:00:01.092) 0:00:23.411 ***********
2025-06-02 17:14:48.063615 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:48.066139 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:48.066223 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:48.067482 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:48.068721 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:48.071712 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:48.073078 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:48.073189 | orchestrator |
2025-06-02 17:14:48.074685 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 17:14:48.075434 | orchestrator | Monday 02 June 2025 17:14:48 +0000 (0:00:00.650) 0:00:24.061 ***********
2025-06-02 17:14:49.179742 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:49.179946 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:49.181800 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:49.182650 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:49.183113 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:49.184113 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:49.185100 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:49.185406 | orchestrator |
2025-06-02 17:14:49.186317 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 17:14:49.186944 | orchestrator | Monday 02 June 2025 17:14:49 +0000 (0:00:01.116) 0:00:25.177 ***********
2025-06-02 17:15:02.863754 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:02.863874 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:02.864460 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:02.866827 | orchestrator | changed: [testbed-manager]
2025-06-02 17:15:02.868216 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:15:02.869183 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:15:02.870823 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:15:02.872713 | orchestrator |
2025-06-02 17:15:02.874205 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-06-02 17:15:02.875534 | orchestrator | Monday 02 June 2025 17:15:02 +0000 (0:00:13.682) 0:00:38.860 ***********
2025-06-02 17:15:02.946231 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:02.978556 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:03.003987 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:03.035328 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:03.108583 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:03.109862 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:03.110327 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:03.111372 | orchestrator |
2025-06-02 17:15:03.112913 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-06-02 17:15:03.114342 | orchestrator | Monday 02 June 2025 17:15:03 +0000 (0:00:00.247) 0:00:39.108 ***********
2025-06-02 17:15:03.193140 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:03.220585 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:03.253622 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:03.277330 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:03.350386 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:03.352526 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:03.353101 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:03.354379 | orchestrator |
2025-06-02 17:15:03.354719 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-06-02 17:15:03.355408 | orchestrator | Monday 02 June 2025 17:15:03 +0000 (0:00:00.241) 0:00:39.349 ***********
2025-06-02 17:15:03.437851 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:03.470715 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:03.508631 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:03.545915 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:03.617528 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:03.617699 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:03.618615 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:03.618931 | orchestrator |
2025-06-02 17:15:03.619770 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-06-02 17:15:03.620775 | orchestrator | Monday 02 June 2025 17:15:03 +0000 (0:00:00.266) 0:00:39.616 ***********
2025-06-02 17:15:03.911433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:15:03.912970 | orchestrator |
2025-06-02 17:15:03.915636 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-06-02 17:15:03.916204 | orchestrator | Monday 02 June 2025 17:15:03 +0000 (0:00:00.292) 0:00:39.908 ***********
2025-06-02 17:15:05.371787 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:05.371962 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:05.373537 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:05.374667 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:05.375573 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:05.376414 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:05.377779 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:05.378123 | orchestrator |
2025-06-02 17:15:05.379072 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-06-02 17:15:05.381913 | orchestrator | Monday 02 June 2025 17:15:05 +0000 (0:00:01.460) 0:00:41.368 ***********
2025-06-02 17:15:06.425460 | orchestrator | changed: [testbed-manager]
2025-06-02 17:15:06.427992 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:15:06.428054 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:15:06.428874 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:15:06.430228 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:15:06.430458 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:15:06.431385 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:15:06.431881 | orchestrator |
2025-06-02 17:15:06.432770 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-02 17:15:06.433193 | orchestrator | Monday 02 June 2025 17:15:06 +0000 (0:00:01.054) 0:00:42.423 ***********
2025-06-02 17:15:07.227624 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:07.227730 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:07.227745 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:07.227757 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:07.229554 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:07.230160 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:07.231111 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:07.232599 | orchestrator |
2025-06-02 17:15:07.232892 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-02 17:15:07.233937 | orchestrator | Monday 02 June 2025 17:15:07 +0000 (0:00:00.798) 0:00:43.221 ***********
2025-06-02 17:15:07.540607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:15:07.540771 | orchestrator |
2025-06-02 17:15:07.541331 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-02 17:15:07.544363 | orchestrator | Monday 02 June 2025 17:15:07 +0000 (0:00:00.317) 0:00:43.539 ***********
2025-06-02 17:15:08.547100 | orchestrator | changed: [testbed-manager]
2025-06-02 17:15:08.547250 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:15:08.551012 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:15:08.551050 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:15:08.551059 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:15:08.553006 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:15:08.553363 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:15:08.554483 | orchestrator |
2025-06-02 17:15:08.554873 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-02 17:15:08.555489 | orchestrator | Monday 02 June 2025 17:15:08 +0000 (0:00:01.004) 0:00:44.544 ***********
2025-06-02 17:15:08.627940 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:15:08.647705 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:15:08.675936 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:15:08.700728 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:15:08.867866 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:15:08.868326 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:15:08.869053 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:15:08.870055 | orchestrator |
2025-06-02 17:15:08.871207 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-02 17:15:08.873077 | orchestrator | Monday 02 June 2025 17:15:08 +0000 (0:00:00.323) 0:00:44.867 ***********
2025-06-02 17:15:21.705970 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:15:21.706132 | orchestrator | changed: [testbed-node-0]
2025-06-02
17:15:21.706204 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:21.706665 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:21.707998 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:21.708428 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:21.709043 | orchestrator | changed: [testbed-manager] 2025-06-02 17:15:21.709540 | orchestrator | 2025-06-02 17:15:21.709772 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-02 17:15:21.710605 | orchestrator | Monday 02 June 2025 17:15:21 +0000 (0:00:12.835) 0:00:57.703 *********** 2025-06-02 17:15:22.665680 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:22.666994 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:22.668091 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:22.669481 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:22.671131 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:22.672134 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:22.673677 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:22.674097 | orchestrator | 2025-06-02 17:15:22.675641 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-02 17:15:22.676226 | orchestrator | Monday 02 June 2025 17:15:22 +0000 (0:00:00.961) 0:00:58.664 *********** 2025-06-02 17:15:23.542197 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:23.542341 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:23.542697 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:23.543699 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:23.544214 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:23.544994 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:23.546095 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:23.546487 | orchestrator | 2025-06-02 17:15:23.547179 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-06-02 17:15:23.547770 | orchestrator | Monday 02 June 2025 17:15:23 +0000 (0:00:00.873) 0:00:59.538 *********** 2025-06-02 17:15:23.635715 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:23.666557 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:23.698113 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:23.728820 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:23.809461 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:23.815354 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:23.815410 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:23.815423 | orchestrator | 2025-06-02 17:15:23.815437 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-02 17:15:23.815449 | orchestrator | Monday 02 June 2025 17:15:23 +0000 (0:00:00.267) 0:00:59.805 *********** 2025-06-02 17:15:23.887098 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:23.920351 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:23.950502 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:23.989589 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:24.063908 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:24.065960 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:24.066691 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:24.067674 | orchestrator | 2025-06-02 17:15:24.068784 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-02 17:15:24.069624 | orchestrator | Monday 02 June 2025 17:15:24 +0000 (0:00:00.257) 0:01:00.063 *********** 2025-06-02 17:15:24.382885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:15:24.383744 | orchestrator | 2025-06-02 17:15:24.384728 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-06-02 17:15:24.385864 | orchestrator | Monday 02 June 2025 17:15:24 +0000 (0:00:00.319) 0:01:00.382 *********** 2025-06-02 17:15:25.867155 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:25.867354 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:25.868360 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:25.869240 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:25.870758 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:25.871815 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:25.872136 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:25.872950 | orchestrator | 2025-06-02 17:15:25.874308 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-02 17:15:25.874332 | orchestrator | Monday 02 June 2025 17:15:25 +0000 (0:00:01.481) 0:01:01.863 *********** 2025-06-02 17:15:26.447583 | orchestrator | changed: [testbed-manager] 2025-06-02 17:15:26.447694 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:26.449743 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:26.452133 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:26.452230 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:26.452887 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:26.453533 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:26.453932 | orchestrator | 2025-06-02 17:15:26.454443 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-02 17:15:26.454937 | orchestrator | Monday 02 June 2025 17:15:26 +0000 (0:00:00.581) 0:01:02.444 *********** 2025-06-02 17:15:26.531786 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:26.546346 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:26.574351 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:26.602293 | orchestrator | ok: [testbed-node-5] 2025-06-02 
17:15:26.678974 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:26.679433 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:26.680922 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:26.684672 | orchestrator | 2025-06-02 17:15:26.684797 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-02 17:15:26.686202 | orchestrator | Monday 02 June 2025 17:15:26 +0000 (0:00:00.233) 0:01:02.678 *********** 2025-06-02 17:15:27.732621 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:27.732719 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:27.735052 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:27.735609 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:27.735666 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:27.737502 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:27.738435 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:27.739498 | orchestrator | 2025-06-02 17:15:27.740675 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-02 17:15:27.741418 | orchestrator | Monday 02 June 2025 17:15:27 +0000 (0:00:01.051) 0:01:03.729 *********** 2025-06-02 17:15:29.212538 | orchestrator | changed: [testbed-manager] 2025-06-02 17:15:29.213757 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:29.217008 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:29.217902 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:29.219392 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:29.220508 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:29.221676 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:29.223313 | orchestrator | 2025-06-02 17:15:29.224769 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-02 17:15:29.225360 | orchestrator | Monday 02 June 2025 17:15:29 +0000 (0:00:01.480) 0:01:05.210 *********** 2025-06-02 
17:15:31.423673 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:31.423805 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:31.424656 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:31.425563 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:31.428004 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:31.428705 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:31.429348 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:31.430205 | orchestrator | 2025-06-02 17:15:31.430904 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-02 17:15:31.431441 | orchestrator | Monday 02 June 2025 17:15:31 +0000 (0:00:02.208) 0:01:07.418 *********** 2025-06-02 17:16:12.213482 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:12.213602 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:12.214334 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:12.215868 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:12.216626 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:12.219114 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:12.220511 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:12.222347 | orchestrator | 2025-06-02 17:16:12.222791 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-02 17:16:12.224102 | orchestrator | Monday 02 June 2025 17:16:12 +0000 (0:00:40.791) 0:01:48.210 *********** 2025-06-02 17:17:25.753137 | orchestrator | changed: [testbed-manager] 2025-06-02 17:17:25.753323 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:17:25.753344 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:17:25.753367 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:17:25.754912 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:17:25.756774 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:17:25.757892 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:17:25.759182 | 
orchestrator | 2025-06-02 17:17:25.760340 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-02 17:17:25.761058 | orchestrator | Monday 02 June 2025 17:17:25 +0000 (0:01:13.537) 0:03:01.747 *********** 2025-06-02 17:17:27.364379 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:17:27.364756 | orchestrator | ok: [testbed-manager] 2025-06-02 17:17:27.366317 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:17:27.367407 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:17:27.368310 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:17:27.369056 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:17:27.370067 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:17:27.371152 | orchestrator | 2025-06-02 17:17:27.372291 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-02 17:17:27.373013 | orchestrator | Monday 02 June 2025 17:17:27 +0000 (0:00:01.613) 0:03:03.360 *********** 2025-06-02 17:17:40.608132 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:17:40.608441 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:17:40.609889 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:17:40.613157 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:17:40.615845 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:17:40.615895 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:17:40.616982 | orchestrator | changed: [testbed-manager] 2025-06-02 17:17:40.618095 | orchestrator | 2025-06-02 17:17:40.618927 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-02 17:17:40.620028 | orchestrator | Monday 02 June 2025 17:17:40 +0000 (0:00:13.239) 0:03:16.600 *********** 2025-06-02 17:17:41.011178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-02 17:17:41.012898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-02 17:17:41.016337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-02 17:17:41.016412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-02 17:17:41.016428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-06-02 17:17:41.017149 | orchestrator | 2025-06-02 17:17:41.018072 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-02 17:17:41.018992 | orchestrator | Monday 02 June 2025 17:17:41 +0000 (0:00:00.411) 0:03:17.011 *********** 2025-06-02 17:17:41.075804 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:17:41.077017 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:17:41.104794 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:17:41.137305 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:17:41.137409 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:17:41.186428 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:17:41.187281 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:17:41.213865 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:17:41.719855 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:17:41.719984 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:17:41.720321 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:17:41.720664 | orchestrator | 2025-06-02 17:17:41.721481 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-02 17:17:41.721598 | orchestrator | Monday 02 June 2025 17:17:41 +0000 (0:00:00.706) 0:03:17.717 *********** 2025-06-02 17:17:41.779512 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:17:41.782611 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:17:41.782662 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 17:17:41.832797 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:17:41.832894 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:17:41.832969 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:17:41.832985 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:17:41.833916 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:17:41.834883 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 17:17:41.835210 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:17:41.836264 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:17:41.836310 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:17:41.836328 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:17:41.836432 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:17:41.836749 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:17:41.836907 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:17:41.838488 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:17:41.841166 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:17:41.844948 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:17:41.845037 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:17:41.845052 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:17:41.847470 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:17:41.847502 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 17:17:41.847933 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:17:41.892101 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:17:41.892252 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:17:41.892614 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:17:41.893743 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:17:41.894809 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:17:41.895669 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:17:41.896443 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:17:41.897056 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:17:41.897712 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:17:41.898927 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 17:17:41.899239 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:17:41.903337 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:17:41.943047 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:17:41.943330 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:17:41.943850 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:17:41.944209 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:17:41.944919 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:17:41.945583 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:17:41.974892 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:17:45.620117 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:17:45.620313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 17:17:45.621449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 17:17:45.621543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 17:17:45.621621 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 17:17:45.624208 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 17:17:45.624323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 17:17:45.624388 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 17:17:45.624406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 17:17:45.624499 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 17:17:45.625605 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 17:17:45.626297 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 17:17:45.626764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 17:17:45.626971 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 17:17:45.627759 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 17:17:45.628745 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 17:17:45.629442 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 17:17:45.630082 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 17:17:45.630969 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 17:17:45.632463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 17:17:45.635027 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-06-02 17:17:45.635081 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 17:17:45.635092 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 17:17:45.636241 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 17:17:45.637920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 17:17:45.637960 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 17:17:45.637972 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 17:17:45.640529 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 17:17:45.640979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 17:17:45.641542 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 17:17:45.642588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 17:17:45.643083 | orchestrator | 2025-06-02 17:17:45.643673 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-02 17:17:45.647102 | orchestrator | Monday 02 June 2025 17:17:45 +0000 (0:00:03.898) 0:03:21.616 *********** 2025-06-02 17:17:47.256706 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:17:47.256788 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:17:47.257839 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:17:47.258889 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:17:47.259841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:17:47.260664 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:17:47.261206 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:17:47.261948 | orchestrator | 2025-06-02 17:17:47.262636 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-02 17:17:47.263006 | orchestrator | Monday 02 June 2025 17:17:47 +0000 (0:00:01.637) 0:03:23.254 *********** 2025-06-02 17:17:47.315276 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:17:47.345092 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:17:47.419362 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:17:47.798998 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:17:47.799883 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:17:47.800691 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:17:47.801079 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:17:47.801783 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:17:47.802115 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 17:17:47.803004 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 17:17:47.803314 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 
17:17:47.804752 | orchestrator |
2025-06-02 17:17:47.804927 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-02 17:17:47.805421 | orchestrator | Monday 02 June 2025  17:17:47 +0000 (0:00:00.540)       0:03:23.795 ***********
2025-06-02 17:17:47.864407 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 17:17:47.895938 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:17:47.979166 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 17:17:48.374353 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:17:48.374807 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 17:17:48.375378 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:17:48.376260 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 17:17:48.376862 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:17:48.377627 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 17:17:48.378464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 17:17:48.379267 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 17:17:48.380011 | orchestrator |
2025-06-02 17:17:48.380548 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-02 17:17:48.381491 | orchestrator | Monday 02 June 2025  17:17:48 +0000 (0:00:00.577)       0:03:24.372 ***********
2025-06-02 17:17:48.479923 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:17:48.523165 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:17:48.563189 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:17:48.601926 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:17:48.787550 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:17:48.787675 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:17:48.789407 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:17:48.792566 | orchestrator |
2025-06-02 17:17:48.793600 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-02 17:17:48.795463 | orchestrator | Monday 02 June 2025  17:17:48 +0000 (0:00:00.410)       0:03:24.783 ***********
2025-06-02 17:17:54.545453 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:17:54.545619 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:17:54.546759 | orchestrator | ok: [testbed-manager]
2025-06-02 17:17:54.549881 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:17:54.549912 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:17:54.549925 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:17:54.550494 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:17:54.551146 | orchestrator |
2025-06-02 17:17:54.551833 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-02 17:17:54.552459 | orchestrator | Monday 02 June 2025  17:17:54 +0000 (0:00:05.760)       0:03:30.543 ***********
2025-06-02 17:17:54.621193 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-02 17:17:54.668474 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:17:54.668680 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-02 17:17:54.669083 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-02 17:17:54.707416 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:17:54.758781 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-02 17:17:54.758961 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:17:54.760295 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
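The sysctl task at the top of this excerpt loops over parameter name/value pairs per host. A minimal sketch of such a loop-based task, assuming the `ansible.posix` collection; the actual osism.commons.sysctl role implementation may differ:

```yaml
# Illustrative sketch only, not the osism.commons.sysctl source.
- name: Set sysctl parameters on k3s_node
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    reload: true
  loop:
    - { name: fs.inotify.max_user_instances, value: 1024 }
```

In the log above, the task is skipped on hosts that are not k3s nodes and reports `changed` on testbed-node-3/4/5, which is the per-host behavior a conditional loop like this produces.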
2025-06-02 17:17:54.793385 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:17:54.860107 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:17:54.860596 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-02 17:17:54.861568 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:17:54.862141 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-02 17:17:54.863023 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:17:54.864743 | orchestrator |
2025-06-02 17:17:54.864847 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-02 17:17:54.864865 | orchestrator | Monday 02 June 2025  17:17:54 +0000 (0:00:00.316)       0:03:30.860 ***********
2025-06-02 17:17:56.001928 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-02 17:17:56.003651 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-02 17:17:56.004917 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-02 17:17:56.006125 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-02 17:17:56.007427 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-02 17:17:56.008140 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-02 17:17:56.008823 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-02 17:17:56.010616 | orchestrator |
2025-06-02 17:17:56.010815 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-02 17:17:56.013425 | orchestrator | Monday 02 June 2025  17:17:55 +0000 (0:00:01.138)       0:03:31.998 ***********
2025-06-02 17:17:56.580108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:17:56.582583 | orchestrator |
2025-06-02 17:17:56.582644 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-02 17:17:56.583060 | orchestrator | Monday 02 June 2025  17:17:56 +0000 (0:00:00.578)       0:03:32.577 ***********
2025-06-02 17:17:57.854106 | orchestrator | ok: [testbed-manager]
2025-06-02 17:17:57.855025 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:17:57.855537 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:17:57.856496 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:17:57.857036 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:17:57.859716 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:17:57.860037 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:17:57.860832 | orchestrator |
2025-06-02 17:17:57.861627 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-06-02 17:17:57.863049 | orchestrator | Monday 02 June 2025  17:17:57 +0000 (0:00:01.274)       0:03:33.851 ***********
2025-06-02 17:17:58.545599 | orchestrator | ok: [testbed-manager]
2025-06-02 17:17:58.545939 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:17:58.547482 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:17:58.549787 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:17:58.550094 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:17:58.551842 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:17:58.552955 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:17:58.554353 | orchestrator |
2025-06-02 17:17:58.555773 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-06-02 17:17:58.556262 | orchestrator | Monday 02 June 2025  17:17:58 +0000 (0:00:00.687)       0:03:34.539 ***********
2025-06-02 17:17:59.150591 | orchestrator | changed: [testbed-manager]
2025-06-02 17:17:59.151949 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:17:59.152065 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:17:59.153814 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:17:59.154555 | orchestrator | changed: [testbed-node-0]
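The motd role first checks whether /etc/default/motd-news exists and then disables the dynamic motd-news mechanism on every host. A minimal sketch of that stat-then-edit pattern; the task names and register variable (`motd_news_file`) are illustrative, not the role's actual source:

```yaml
# Illustrative sketch: one common way to disable motd-news on Debian/Ubuntu.
- name: Check if /etc/default/motd-news exists
  ansible.builtin.stat:
    path: /etc/default/motd-news
  register: motd_news_file

- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: 'ENABLED=0'
  when: motd_news_file.stat.exists
```

Guarding the edit with the stat result is what lets the same role run unchanged on distributions that ship no motd-news file at all.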
2025-06-02 17:17:59.155707 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:17:59.155914 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:17:59.157015 | orchestrator |
2025-06-02 17:17:59.157964 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-02 17:17:59.159075 | orchestrator | Monday 02 June 2025  17:17:59 +0000 (0:00:00.610)       0:03:35.150 ***********
2025-06-02 17:17:59.834855 | orchestrator | ok: [testbed-manager]
2025-06-02 17:17:59.838673 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:17:59.841627 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:17:59.842850 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:17:59.844304 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:17:59.845364 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:17:59.846199 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:17:59.848308 | orchestrator |
2025-06-02 17:17:59.848760 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-02 17:17:59.849575 | orchestrator | Monday 02 June 2025  17:17:59 +0000 (0:00:00.680)       0:03:35.831 ***********
2025-06-02 17:18:00.794738 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883321.267961, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.794852 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883377.0340393, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.794869 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883386.1089885, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.795551 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883394.3457727, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.796887 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883385.7776504, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.797319 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883379.3476522, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.798844 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883395.4944263, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.801635 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883345.332082, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.801930 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883275.8983073, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.803119 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883274.7636433, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.804069 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883287.2689679, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.804821 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883283.4408066, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.805665 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883282.0019898, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.806260 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883290.4979978, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:00.807020 | orchestrator |
2025-06-02 17:18:00.807505 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-02 17:18:00.808208 | orchestrator | Monday 02 June 2025  17:18:00 +0000 (0:00:00.958)       0:03:36.789 ***********
2025-06-02 17:18:01.980029 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:01.980802 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:01.983063 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:01.984376 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:01.984773 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:01.985524 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:01.986568 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:01.987611 | orchestrator |
2025-06-02 17:18:01.988672 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-02 17:18:01.989586 | orchestrator | Monday 02 June 2025  17:18:01 +0000 (0:00:01.187)       0:03:37.977 ***********
2025-06-02 17:18:03.124139 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:03.124657 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:03.125048 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:03.126108 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:03.126570 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:03.127692 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:03.128408 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:03.128685 | orchestrator |
2025-06-02 17:18:03.129511 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-02 17:18:03.130122 | orchestrator | Monday 02 June 2025  17:18:03 +0000 (0:00:01.144)       0:03:39.122 ***********
2025-06-02 17:18:04.346544 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:04.346653 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:04.346669 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:04.346680 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:04.346759 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:04.346775 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:04.347260 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:04.348644 | orchestrator |
2025-06-02 17:18:04.348669 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
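The role then replaces the dynamic motd with static /etc/motd, /etc/issue and /etc/issue.net files. A minimal sketch of that pattern, assuming a template named `motd.j2`; file names, handler name, and template source are illustrative, not the role's actual layout:

```yaml
# Illustrative sketch of static motd/issue handling.
- name: Copy motd file
  ansible.builtin.template:
    src: motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: "0644"

- name: Configure SSH to not print the motd
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PrintMotd'
    line: 'PrintMotd no'
  notify: Restart sshd  # hypothetical handler name
```

With the pam_motd.so rules removed above and `PrintMotd no` in sshd_config, only the static /etc/motd content is shown at login.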
2025-06-02 17:18:04.348682 | orchestrator | Monday 02 June 2025  17:18:04 +0000 (0:00:01.222)       0:03:40.345 ***********
2025-06-02 17:18:04.419522 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:18:04.454931 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:18:04.510851 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:18:04.550705 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:18:04.590533 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:18:04.663165 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:18:04.663293 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:18:04.663966 | orchestrator |
2025-06-02 17:18:04.664618 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-02 17:18:04.665645 | orchestrator | Monday 02 June 2025  17:18:04 +0000 (0:00:00.316)       0:03:40.662 ***********
2025-06-02 17:18:05.388767 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:05.389192 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:05.390086 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:05.391652 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:05.392663 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:05.392921 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:05.393479 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:05.394871 | orchestrator |
2025-06-02 17:18:05.394908 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-02 17:18:05.395377 | orchestrator | Monday 02 June 2025  17:18:05 +0000 (0:00:00.723)       0:03:41.385 ***********
2025-06-02 17:18:05.857303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:18:05.857629 | orchestrator |
2025-06-02 17:18:05.858309 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-02 17:18:05.858748 | orchestrator | Monday 02 June 2025  17:18:05 +0000 (0:00:00.468)       0:03:41.854 ***********
2025-06-02 17:18:13.138667 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:13.138884 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:13.140273 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:13.141975 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:13.142929 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:13.145134 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:13.145971 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:13.147131 | orchestrator |
2025-06-02 17:18:13.147974 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-02 17:18:13.148716 | orchestrator | Monday 02 June 2025  17:18:13 +0000 (0:00:07.280)       0:03:49.134 ***********
2025-06-02 17:18:14.349638 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:14.350127 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:14.351422 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:14.352654 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:14.355673 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:14.356644 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:14.357986 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:14.358742 | orchestrator |
2025-06-02 17:18:14.360306 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-02 17:18:14.360397 | orchestrator | Monday 02 June 2025  17:18:14 +0000 (0:00:01.213)       0:03:50.348 ***********
2025-06-02 17:18:15.405454 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:15.406637 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:15.408033 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:15.409711 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:15.410800 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:15.412016 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:15.412616 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:15.413590 | orchestrator |
2025-06-02 17:18:15.414606 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-02 17:18:15.415424 | orchestrator | Monday 02 June 2025  17:18:15 +0000 (0:00:01.054)       0:03:51.402 ***********
2025-06-02 17:18:15.944859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:18:15.944973 | orchestrator |
2025-06-02 17:18:15.944989 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-02 17:18:15.946201 | orchestrator | Monday 02 June 2025  17:18:15 +0000 (0:00:00.541)       0:03:51.943 ***********
2025-06-02 17:18:24.449019 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:24.449263 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:24.450898 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:24.452535 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:24.453490 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:24.455437 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:24.456102 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:24.457583 | orchestrator |
2025-06-02 17:18:24.458301 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-02 17:18:24.459420 | orchestrator | Monday 02 June 2025  17:18:24 +0000 (0:00:08.503)       0:04:00.447 ***********
2025-06-02 17:18:25.087649 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:25.088073 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:25.089440 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:25.090499 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:25.091601 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:25.092562 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:25.094081 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:25.094872 | orchestrator |
2025-06-02 17:18:25.095836 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-02 17:18:25.096799 | orchestrator | Monday 02 June 2025  17:18:25 +0000 (0:00:00.638)       0:04:01.085 ***********
2025-06-02 17:18:26.184666 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:26.185760 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:26.187259 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:26.188173 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:26.189333 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:26.190500 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:26.191380 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:26.192096 | orchestrator |
2025-06-02 17:18:26.192896 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-02 17:18:26.194749 | orchestrator | Monday 02 June 2025  17:18:26 +0000 (0:00:01.097)       0:04:02.182 ***********
2025-06-02 17:18:27.245785 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:27.245952 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:27.249582 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:27.250452 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:27.251800 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:27.252365 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:27.253369 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:27.253686 | orchestrator |
2025-06-02 17:18:27.254530 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-02 17:18:27.254864 | orchestrator | Monday 02 June 2025  17:18:27 +0000 (0:00:01.061)       0:04:03.244 ***********
2025-06-02 17:18:27.358819 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:27.393164 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:27.428542 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:27.465171 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:27.538362 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:27.538481 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:27.539775 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:27.541776 | orchestrator |
2025-06-02 17:18:27.542701 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-02 17:18:27.543792 | orchestrator | Monday 02 June 2025  17:18:27 +0000 (0:00:00.293)       0:04:03.538 ***********
2025-06-02 17:18:27.675993 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:27.712515 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:27.760896 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:27.796928 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:27.863874 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:27.866091 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:27.866867 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:27.867654 | orchestrator |
2025-06-02 17:18:27.868437 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-02 17:18:27.869119 | orchestrator | Monday 02 June 2025  17:18:27 +0000 (0:00:00.316)       0:04:03.863 ***********
2025-06-02 17:18:27.974381 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:28.005505 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:28.043816 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:28.089193 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:28.180199 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:28.181390 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:28.182697 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:28.183502 | orchestrator |
2025-06-02 17:18:28.185622 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-02 17:18:28.185761 | orchestrator | Monday 02 June 2025  17:18:28 +0000 (0:00:00.316)       0:04:04.179 ***********
2025-06-02 17:18:33.761345 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:33.761533 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:33.762315 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:33.763426 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:33.764885 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:33.765499 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:33.766573 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:33.767683 | orchestrator |
2025-06-02 17:18:33.768811 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-02 17:18:33.769564 | orchestrator | Monday 02 June 2025  17:18:33 +0000 (0:00:05.580)       0:04:09.760 ***********
2025-06-02 17:18:34.178115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:18:34.178399 | orchestrator |
2025-06-02 17:18:34.179368 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-02 17:18:34.180762 | orchestrator | Monday 02 June 2025  17:18:34 +0000 (0:00:00.416)       0:04:10.176 ***********
2025-06-02 17:18:34.275745 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-02 17:18:34.275927 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-02 17:18:34.276345 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-02 17:18:34.276732 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-02 17:18:34.318918 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:18:34.319078 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-02 17:18:34.387406 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:18:34.388086 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-02 17:18:34.389052 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-02 17:18:34.389095 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-02 17:18:34.423570 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:18:34.467604 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:18:34.467882 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-02 17:18:34.468504 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-02 17:18:34.542889 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:18:34.543087 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-02 17:18:34.543674 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-02 17:18:34.544856 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:18:34.544913 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-02 17:18:34.545762 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-02 17:18:34.546612 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:18:34.547037 | orchestrator |
2025-06-02 17:18:34.547648 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-02 17:18:34.548694 | orchestrator | Monday 02 June 2025  17:18:34 +0000 (0:00:00.365)       0:04:10.542 ***********
2025-06-02 17:18:34.981492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:18:34.984461 | orchestrator |
2025-06-02 17:18:34.985968 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-02 17:18:34.988165 | orchestrator | Monday 02 June 2025  17:18:34 +0000 (0:00:00.435)       0:04:10.977 ***********
2025-06-02 17:18:35.058455 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-02 17:18:35.061780 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-02 17:18:35.098657 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:18:35.098765 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-02 17:18:35.141365 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:18:35.185677 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-02 17:18:35.187051 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:18:35.187472 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-02 17:18:35.218155 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:18:35.313521 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-02 17:18:35.314675 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:18:35.315249 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:18:35.315445 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-02 17:18:35.315868 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:18:35.316251 | orchestrator |
2025-06-02 17:18:35.316899 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-02 17:18:35.317676 | orchestrator | Monday 02 June 2025  17:18:35 +0000 (0:00:00.333)       0:04:11.311 ***********
2025-06-02 17:18:35.877679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:18:35.878674 | orchestrator |
2025-06-02 17:18:35.879597 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-02 17:18:35.880425 | orchestrator | Monday 02 June 2025  17:18:35 +0000 (0:00:00.564)       0:04:11.875 ***********
2025-06-02 17:19:08.977276 | orchestrator | changed: [testbed-manager]
2025-06-02 17:19:08.977400 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:08.977675 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:08.978491 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:08.979491 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:08.980446 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:08.982082 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:08.982918 | orchestrator |
2025-06-02 17:19:08.983549 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-02 17:19:08.984388 | orchestrator | Monday 02 June 2025  17:19:08 +0000 (0:00:33.097)       0:04:44.973 ***********
2025-06-02 17:19:16.424190 | orchestrator | changed: [testbed-manager]
2025-06-02 17:19:16.424361 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:16.424377 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:16.425622 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:16.425656 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:16.426629 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:16.426661 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:16.426680 | orchestrator |
2025-06-02 17:19:16.427628 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-02 17:19:16.429068 | orchestrator | Monday 02 June 2025  17:19:16 +0000 (0:00:07.447)       0:04:52.421 ***********
2025-06-02 17:19:23.347556 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:23.349915 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:23.349993 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:23.350935 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:23.352328 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:23.353009 | orchestrator | changed: [testbed-manager]
2025-06-02 17:19:23.354297 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:23.355135 | orchestrator |
2025-06-02 17:19:23.356351 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-02 17:19:23.357165 | orchestrator | Monday 02 June 2025  17:19:23 +0000 (0:00:06.924)       0:04:59.345 ***********
2025-06-02 17:19:24.982130 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:24.982594 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:19:24.983460 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:19:24.984525 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:19:24.986136 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:19:24.986731 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:19:24.987465 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:19:24.988670 | orchestrator |
2025-06-02 17:19:24.989426 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-02 17:19:24.989923 | orchestrator | Monday 02 June 2025  17:19:24 +0000 (0:00:01.633)       0:05:00.978 ***********
2025-06-02 17:19:30.526607 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:30.527623 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:30.528860 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:30.529662 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:30.530338 |
orchestrator | changed: [testbed-node-2] 2025-06-02 17:19:30.531285 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:19:30.531831 | orchestrator | changed: [testbed-manager] 2025-06-02 17:19:30.532368 | orchestrator | 2025-06-02 17:19:30.532908 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-06-02 17:19:30.533367 | orchestrator | Monday 02 June 2025 17:19:30 +0000 (0:00:05.543) 0:05:06.522 *********** 2025-06-02 17:19:30.991513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:19:30.991809 | orchestrator | 2025-06-02 17:19:30.992464 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-06-02 17:19:30.993390 | orchestrator | Monday 02 June 2025 17:19:30 +0000 (0:00:00.465) 0:05:06.988 *********** 2025-06-02 17:19:31.760565 | orchestrator | changed: [testbed-manager] 2025-06-02 17:19:31.761310 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:19:31.761977 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:19:31.763130 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:19:31.764522 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:19:31.766788 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:19:31.767747 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:19:31.768500 | orchestrator | 2025-06-02 17:19:31.769125 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-06-02 17:19:31.770319 | orchestrator | Monday 02 June 2025 17:19:31 +0000 (0:00:00.769) 0:05:07.757 *********** 2025-06-02 17:19:33.274632 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:33.274874 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:19:33.275849 | orchestrator | ok: [testbed-node-3] 
2025-06-02 17:19:33.277609 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:19:33.277861 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:19:33.279007 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:19:33.279719 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:19:33.280437 | orchestrator | 2025-06-02 17:19:33.281184 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-02 17:19:33.281787 | orchestrator | Monday 02 June 2025 17:19:33 +0000 (0:00:01.514) 0:05:09.272 *********** 2025-06-02 17:19:34.036372 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:19:34.036480 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:19:34.036827 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:19:34.037526 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:19:34.038162 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:19:34.038638 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:19:34.039135 | orchestrator | changed: [testbed-manager] 2025-06-02 17:19:34.039692 | orchestrator | 2025-06-02 17:19:34.040247 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-02 17:19:34.040730 | orchestrator | Monday 02 June 2025 17:19:34 +0000 (0:00:00.762) 0:05:10.035 *********** 2025-06-02 17:19:34.110304 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:19:34.145776 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:19:34.199448 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:19:34.235606 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:19:34.270072 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:19:34.333356 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:19:34.335484 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:19:34.335528 | orchestrator | 2025-06-02 17:19:34.336396 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 
2025-06-02 17:19:34.337733 | orchestrator | Monday 02 June 2025 17:19:34 +0000 (0:00:00.296) 0:05:10.331 *********** 2025-06-02 17:19:34.448139 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:19:34.488438 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:19:34.526601 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:19:34.563417 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:19:34.754902 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:19:34.755098 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:19:34.756614 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:19:34.757378 | orchestrator | 2025-06-02 17:19:34.758458 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-02 17:19:34.759604 | orchestrator | Monday 02 June 2025 17:19:34 +0000 (0:00:00.420) 0:05:10.752 *********** 2025-06-02 17:19:34.882479 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:34.917228 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:19:34.953796 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:19:34.991374 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:19:35.076088 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:19:35.077548 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:19:35.078436 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:19:35.079808 | orchestrator | 2025-06-02 17:19:35.080480 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-02 17:19:35.081198 | orchestrator | Monday 02 June 2025 17:19:35 +0000 (0:00:00.322) 0:05:11.075 *********** 2025-06-02 17:19:35.155096 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:19:35.193257 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:19:35.231947 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:19:35.270191 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:19:35.309027 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 17:19:35.387052 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:19:35.387321 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:19:35.388082 | orchestrator | 2025-06-02 17:19:35.388881 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-02 17:19:35.389639 | orchestrator | Monday 02 June 2025 17:19:35 +0000 (0:00:00.311) 0:05:11.386 *********** 2025-06-02 17:19:35.495462 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:35.535503 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:19:35.597430 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:19:35.636661 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:19:35.710828 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:19:35.712746 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:19:35.714254 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:19:35.715973 | orchestrator | 2025-06-02 17:19:35.717446 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-02 17:19:35.718425 | orchestrator | Monday 02 June 2025 17:19:35 +0000 (0:00:00.321) 0:05:11.708 *********** 2025-06-02 17:19:35.835847 | orchestrator | ok: [testbed-manager] => { 2025-06-02 17:19:35.837058 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 17:19:35.838769 | orchestrator | } 2025-06-02 17:19:35.869000 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 17:19:35.871258 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 17:19:35.871810 | orchestrator | } 2025-06-02 17:19:35.901775 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:19:35.902746 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 17:19:35.903019 | orchestrator | } 2025-06-02 17:19:35.940889 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:19:35.942485 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 17:19:35.943653 | orchestrator | } 2025-06-02 17:19:36.033195 | orchestrator | ok: [testbed-node-0] 
=> { 2025-06-02 17:19:36.035090 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 17:19:36.036409 | orchestrator | } 2025-06-02 17:19:36.037911 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 17:19:36.040834 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 17:19:36.040883 | orchestrator | } 2025-06-02 17:19:36.040896 | orchestrator | ok: [testbed-node-2] => { 2025-06-02 17:19:36.041000 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 17:19:36.043797 | orchestrator | } 2025-06-02 17:19:36.043839 | orchestrator | 2025-06-02 17:19:36.043852 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-02 17:19:36.044900 | orchestrator | Monday 02 June 2025 17:19:36 +0000 (0:00:00.323) 0:05:12.031 *********** 2025-06-02 17:19:36.168428 | orchestrator | ok: [testbed-manager] => { 2025-06-02 17:19:36.169001 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 17:19:36.169465 | orchestrator | } 2025-06-02 17:19:36.332073 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 17:19:36.332814 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 17:19:36.335641 | orchestrator | } 2025-06-02 17:19:36.373111 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:19:36.373710 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 17:19:36.374713 | orchestrator | } 2025-06-02 17:19:36.416150 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:19:36.416296 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 17:19:36.417417 | orchestrator | } 2025-06-02 17:19:36.485357 | orchestrator | ok: [testbed-node-0] => { 2025-06-02 17:19:36.486435 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 17:19:36.488881 | orchestrator | } 2025-06-02 17:19:36.488936 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 17:19:36.489000 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 17:19:36.490837 | orchestrator | } 2025-06-02 17:19:36.490971 | orchestrator | 
ok: [testbed-node-2] => { 2025-06-02 17:19:36.490991 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 17:19:36.492438 | orchestrator | } 2025-06-02 17:19:36.492471 | orchestrator | 2025-06-02 17:19:36.492936 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-02 17:19:36.493729 | orchestrator | Monday 02 June 2025 17:19:36 +0000 (0:00:00.451) 0:05:12.483 *********** 2025-06-02 17:19:36.566609 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:19:36.600317 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:19:36.638671 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:19:36.674110 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:19:36.717640 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:19:36.774749 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:19:36.775515 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:19:36.776290 | orchestrator | 2025-06-02 17:19:36.777478 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-02 17:19:36.778922 | orchestrator | Monday 02 June 2025 17:19:36 +0000 (0:00:00.291) 0:05:12.774 *********** 2025-06-02 17:19:36.860462 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:19:36.896484 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:19:36.934348 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:19:36.969802 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:19:37.006659 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:19:37.071779 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:19:37.074523 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:19:37.074710 | orchestrator | 2025-06-02 17:19:37.076004 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-02 17:19:37.077384 | orchestrator | Monday 02 June 2025 17:19:37 +0000 (0:00:00.295) 0:05:13.070 
*********** 2025-06-02 17:19:37.522814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:19:37.523710 | orchestrator | 2025-06-02 17:19:37.524918 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-02 17:19:37.525518 | orchestrator | Monday 02 June 2025 17:19:37 +0000 (0:00:00.451) 0:05:13.521 *********** 2025-06-02 17:19:38.386322 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:38.386554 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:19:38.388922 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:19:38.389536 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:19:38.390271 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:19:38.394348 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:19:38.395414 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:19:38.399248 | orchestrator | 2025-06-02 17:19:38.400503 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-02 17:19:38.402528 | orchestrator | Monday 02 June 2025 17:19:38 +0000 (0:00:00.860) 0:05:14.382 *********** 2025-06-02 17:19:41.264691 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:19:41.268646 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:19:41.270280 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:19:41.271754 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:19:41.272868 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:19:41.273891 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:19:41.274715 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:41.275481 | orchestrator | 2025-06-02 17:19:41.276129 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-02 17:19:41.276879 
| orchestrator | Monday 02 June 2025 17:19:41 +0000 (0:00:02.880) 0:05:17.263 *********** 2025-06-02 17:19:41.347253 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-02 17:19:41.351876 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-02 17:19:41.422081 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-02 17:19:41.422724 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-02 17:19:41.423615 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-02 17:19:41.502901 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:19:41.507500 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-02 17:19:41.507544 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-02 17:19:41.507556 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-02 17:19:41.507714 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-02 17:19:41.718752 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:19:41.718859 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-02 17:19:41.718962 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-02 17:19:41.719702 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-02 17:19:41.794319 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:19:41.794494 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-02 17:19:41.798411 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-02 17:19:41.798449 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-02 17:19:41.888093 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:19:41.889046 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-02 17:19:41.890638 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-02 17:19:41.891993 | 
orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-02 17:19:42.065864 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:19:42.068608 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:19:42.071554 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-02 17:19:42.071593 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-02 17:19:42.072072 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-02 17:19:42.073567 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:19:42.073628 | orchestrator | 2025-06-02 17:19:42.074580 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-02 17:19:42.074977 | orchestrator | Monday 02 June 2025 17:19:42 +0000 (0:00:00.800) 0:05:18.063 *********** 2025-06-02 17:19:48.162560 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:48.164492 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:19:48.164925 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:19:48.167321 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:19:48.168988 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:19:48.169511 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:19:48.170176 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:19:48.170614 | orchestrator | 2025-06-02 17:19:48.171177 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-02 17:19:48.171571 | orchestrator | Monday 02 June 2025 17:19:48 +0000 (0:00:06.094) 0:05:24.158 *********** 2025-06-02 17:19:49.216504 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:19:49.216626 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:49.216643 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:19:49.217302 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:19:49.217730 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:19:49.217754 | 
orchestrator | changed: [testbed-node-1] 2025-06-02 17:19:49.221070 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:19:49.221913 | orchestrator | 2025-06-02 17:19:49.223147 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-02 17:19:49.224452 | orchestrator | Monday 02 June 2025 17:19:49 +0000 (0:00:01.055) 0:05:25.214 *********** 2025-06-02 17:19:57.271120 | orchestrator | ok: [testbed-manager] 2025-06-02 17:19:57.271619 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:19:57.272221 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:19:57.272457 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:19:57.274564 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:19:57.274603 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:19:57.274615 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:19:57.276086 | orchestrator | 2025-06-02 17:19:57.276310 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-02 17:19:57.276404 | orchestrator | Monday 02 June 2025 17:19:57 +0000 (0:00:08.056) 0:05:33.270 *********** 2025-06-02 17:20:00.517453 | orchestrator | changed: [testbed-manager] 2025-06-02 17:20:00.518987 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:00.519702 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:00.520776 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:00.521695 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:00.523424 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:00.523907 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:00.524864 | orchestrator | 2025-06-02 17:20:00.525321 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-02 17:20:00.526148 | orchestrator | Monday 02 June 2025 17:20:00 +0000 (0:00:03.242) 0:05:36.512 *********** 2025-06-02 17:20:02.079712 | orchestrator | ok: 
[testbed-manager] 2025-06-02 17:20:02.080075 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:02.082350 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:02.082379 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:02.082855 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:02.084244 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:02.084374 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:02.084767 | orchestrator | 2025-06-02 17:20:02.085503 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-02 17:20:02.085703 | orchestrator | Monday 02 June 2025 17:20:02 +0000 (0:00:01.563) 0:05:38.076 *********** 2025-06-02 17:20:03.420969 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:03.421091 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:03.421174 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:03.421463 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:03.422070 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:03.423119 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:03.423298 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:03.423319 | orchestrator | 2025-06-02 17:20:03.423599 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-02 17:20:03.423889 | orchestrator | Monday 02 June 2025 17:20:03 +0000 (0:00:01.341) 0:05:39.417 *********** 2025-06-02 17:20:03.643407 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:20:03.732484 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:20:03.796759 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:20:03.863966 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:20:04.029512 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:20:04.032491 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:20:04.034597 | orchestrator | changed: [testbed-manager] 
2025-06-02 17:20:04.034660 | orchestrator | 2025-06-02 17:20:04.040471 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-02 17:20:04.040995 | orchestrator | Monday 02 June 2025 17:20:04 +0000 (0:00:00.611) 0:05:40.029 *********** 2025-06-02 17:20:13.497705 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:13.498429 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:13.498942 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:13.499667 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:13.501716 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:13.501749 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:13.502003 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:13.502470 | orchestrator | 2025-06-02 17:20:13.502933 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-02 17:20:13.503268 | orchestrator | Monday 02 June 2025 17:20:13 +0000 (0:00:09.465) 0:05:49.494 *********** 2025-06-02 17:20:14.440125 | orchestrator | changed: [testbed-manager] 2025-06-02 17:20:14.440904 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:14.441785 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:14.442873 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:14.444030 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:14.444238 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:14.444947 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:14.445328 | orchestrator | 2025-06-02 17:20:14.445861 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-02 17:20:14.446326 | orchestrator | Monday 02 June 2025 17:20:14 +0000 (0:00:00.943) 0:05:50.438 *********** 2025-06-02 17:20:22.945947 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:22.946441 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:22.948426 | 
orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:22.950628 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:22.952478 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:22.952970 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:22.953715 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:22.954344 | orchestrator | 2025-06-02 17:20:22.955349 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-02 17:20:22.956023 | orchestrator | Monday 02 June 2025 17:20:22 +0000 (0:00:08.505) 0:05:58.944 *********** 2025-06-02 17:20:33.352892 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:33.353006 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:33.353334 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:33.353634 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:33.354291 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:33.354979 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:33.355779 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:33.356219 | orchestrator | 2025-06-02 17:20:33.358277 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-02 17:20:33.359219 | orchestrator | Monday 02 June 2025 17:20:33 +0000 (0:00:10.402) 0:06:09.347 *********** 2025-06-02 17:20:33.773675 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-02 17:20:34.540657 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-02 17:20:34.541542 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-02 17:20:34.542908 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-02 17:20:34.543544 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-02 17:20:34.544703 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-02 17:20:34.544960 | orchestrator | ok: [testbed-node-1] => 
(item=python3-docker)
2025-06-02 17:20:34.545546 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-02 17:20:34.546096 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-02 17:20:34.546553 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-02 17:20:34.547052 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-02 17:20:34.547821 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-02 17:20:34.548520 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-02 17:20:34.548873 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-02 17:20:34.549521 | orchestrator |
2025-06-02 17:20:34.549915 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-02 17:20:34.550524 | orchestrator | Monday 02 June 2025 17:20:34 +0000 (0:00:01.190) 0:06:10.537 ***********
2025-06-02 17:20:34.692387 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:34.759769 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:34.835594 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:34.903301 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:34.979335 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:35.107155 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:35.107652 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:35.108884 | orchestrator |
2025-06-02 17:20:35.109837 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-02 17:20:35.112130 | orchestrator | Monday 02 June 2025 17:20:35 +0000 (0:00:00.568) 0:06:11.106 ***********
2025-06-02 17:20:38.877888 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:38.878232 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:38.879418 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:38.880383 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:38.882800 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:38.884018 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:38.886129 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:38.887286 | orchestrator |
2025-06-02 17:20:38.888587 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-02 17:20:38.888997 | orchestrator | Monday 02 June 2025 17:20:38 +0000 (0:00:03.766) 0:06:14.873 ***********
2025-06-02 17:20:39.011914 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:39.079147 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:39.147022 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:39.220052 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:39.285735 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:39.389512 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:39.389948 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:39.391157 | orchestrator |
2025-06-02 17:20:39.392390 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-02 17:20:39.392658 | orchestrator | Monday 02 June 2025 17:20:39 +0000 (0:00:00.513) 0:06:15.387 ***********
2025-06-02 17:20:39.465445 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-02 17:20:39.466441 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-02 17:20:39.546872 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:39.547974 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-02 17:20:39.548974 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-02 17:20:39.618571 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:39.619384 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-02 17:20:39.620227 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-02 17:20:39.697038 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:39.697283 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-02 17:20:39.698380 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-02 17:20:39.767764 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:39.768661 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-02 17:20:39.771806 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-02 17:20:39.852613 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:39.853858 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-02 17:20:39.856758 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-02 17:20:39.995626 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:39.998621 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-02 17:20:39.998734 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-02 17:20:39.998750 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:39.999483 | orchestrator |
2025-06-02 17:20:40.000375 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-02 17:20:40.001050 | orchestrator | Monday 02 June 2025 17:20:39 +0000 (0:00:00.606) 0:06:15.993 ***********
2025-06-02 17:20:40.129279 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:40.203803 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:40.265105 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:40.338633 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:40.413737 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:40.556069 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:40.556704 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:40.557689 | orchestrator |
2025-06-02 17:20:40.558616 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-02 17:20:40.561402 | orchestrator | Monday 02 June 2025 17:20:40 +0000 (0:00:00.560) 0:06:16.553 ***********
2025-06-02 17:20:40.698878 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:40.764411 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:40.833022 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:40.903723 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:40.967539 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:41.066642 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:41.067721 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:41.069777 | orchestrator |
2025-06-02 17:20:41.070524 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-02 17:20:41.071783 | orchestrator | Monday 02 June 2025 17:20:41 +0000 (0:00:00.510) 0:06:17.064 ***********
2025-06-02 17:20:41.203081 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:41.269056 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:41.521001 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:41.589898 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:41.660627 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:41.790104 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:41.791240 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:41.793203 | orchestrator |
2025-06-02 17:20:41.795622 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-02 17:20:41.796584 | orchestrator | Monday 02 June 2025 17:20:41 +0000 (0:00:00.723) 0:06:17.787 ***********
2025-06-02 17:20:43.534905 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:43.537609 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:43.537843 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:43.540365 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:43.541650 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:43.542703 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:43.544290 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:43.545035 | orchestrator |
2025-06-02 17:20:43.546413 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-02 17:20:43.547795 | orchestrator | Monday 02 June 2025 17:20:43 +0000 (0:00:01.744) 0:06:19.531 ***********
2025-06-02 17:20:44.421948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:20:44.423456 | orchestrator |
2025-06-02 17:20:44.424333 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-02 17:20:44.425067 | orchestrator | Monday 02 June 2025 17:20:44 +0000 (0:00:00.888) 0:06:20.420 ***********
2025-06-02 17:20:45.263799 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:45.263988 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:45.265154 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:45.266535 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:45.267628 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:45.268668 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:45.269421 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:45.270571 | orchestrator |
2025-06-02 17:20:45.270867 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-02 17:20:45.271834 | orchestrator | Monday 02 June 2025 17:20:45 +0000 (0:00:00.838) 0:06:21.259 ***********
2025-06-02 17:20:45.756194 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:45.828651 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:46.334846 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:46.335317 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:46.336057 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:46.336981 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:46.337667 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:46.339192 | orchestrator |
2025-06-02 17:20:46.339874 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-02 17:20:46.340054 | orchestrator | Monday 02 June 2025 17:20:46 +0000 (0:00:01.072) 0:06:22.332 ***********
2025-06-02 17:20:47.707309 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:47.707421 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:47.708045 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:47.709016 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:47.710639 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:47.712653 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:47.714608 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:47.714648 | orchestrator |
2025-06-02 17:20:47.715889 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-02 17:20:47.716113 | orchestrator | Monday 02 June 2025 17:20:47 +0000 (0:00:01.371) 0:06:23.703 ***********
2025-06-02 17:20:47.865071 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:49.118437 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:49.119998 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:49.121142 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:49.122979 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:49.123346 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:49.124657 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:49.125474 | orchestrator |
2025-06-02 17:20:49.126300 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-02 17:20:49.126966 | orchestrator | Monday 02 June 2025 17:20:49 +0000 (0:00:01.408) 0:06:25.112 ***********
2025-06-02 17:20:50.485841 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:50.486230 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:50.486566 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:50.487873 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:50.489478 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:50.491280 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:50.492119 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:50.493181 | orchestrator |
2025-06-02 17:20:50.493750 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-02 17:20:50.494130 | orchestrator | Monday 02 June 2025 17:20:50 +0000 (0:00:01.368) 0:06:26.480 ***********
2025-06-02 17:20:52.079828 | orchestrator | changed: [testbed-manager]
2025-06-02 17:20:52.080561 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:52.082246 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:52.083320 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:52.085760 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:52.087826 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:52.088772 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:52.089519 | orchestrator |
2025-06-02 17:20:52.090259 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-02 17:20:52.090803 | orchestrator | Monday 02 June 2025 17:20:52 +0000 (0:00:01.596) 0:06:28.077 ***********
2025-06-02 17:20:52.952526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:20:52.953343 | orchestrator |
2025-06-02 17:20:52.954737 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-06-02 17:20:52.955485 | orchestrator | Monday 02 June 2025 17:20:52 +0000 (0:00:00.873) 0:06:28.950 ***********
2025-06-02 17:20:54.301000 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:54.301378 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:54.303299 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:54.304055 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:54.306372 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:54.307226 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:54.309548 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:54.311017 | orchestrator |
2025-06-02 17:20:54.312374 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-06-02 17:20:54.313316 | orchestrator | Monday 02 June 2025 17:20:54 +0000 (0:00:01.348) 0:06:30.298 ***********
2025-06-02 17:20:55.455076 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:55.456199 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:55.457311 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:55.458216 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:55.460129 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:55.461329 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:55.461788 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:55.462877 | orchestrator |
2025-06-02 17:20:55.463727 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-06-02 17:20:55.464403 | orchestrator | Monday 02 June 2025 17:20:55 +0000 (0:00:01.153) 0:06:31.451 ***********
2025-06-02 17:20:56.799503 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:56.799740 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:56.800912 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:56.801936 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:56.802574 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:56.803449 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:56.803672 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:56.804235 | orchestrator |
2025-06-02 17:20:56.804710 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-06-02 17:20:56.805324 | orchestrator | Monday 02 June 2025 17:20:56 +0000 (0:00:01.343) 0:06:32.795 ***********
2025-06-02 17:20:57.909426 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:57.909958 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:57.910525 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:57.911350 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:57.912472 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:57.913517 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:57.914489 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:57.915455 | orchestrator |
2025-06-02 17:20:57.916252 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-06-02 17:20:57.916912 | orchestrator | Monday 02 June 2025 17:20:57 +0000 (0:00:01.110) 0:06:33.906 ***********
2025-06-02 17:20:59.114754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:20:59.117823 | orchestrator |
2025-06-02 17:20:59.117865 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:20:59.117879 | orchestrator | Monday 02 June 2025 17:20:58 +0000 (0:00:00.907) 0:06:34.814 ***********
2025-06-02 17:20:59.118526 | orchestrator |
2025-06-02 17:20:59.119339 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:20:59.120398 | orchestrator | Monday 02 June 2025 17:20:58 +0000 (0:00:00.040) 0:06:34.854 ***********
2025-06-02 17:20:59.121346 | orchestrator |
2025-06-02 17:20:59.122124 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:20:59.122815 | orchestrator | Monday 02 June 2025 17:20:58 +0000 (0:00:00.045) 0:06:34.900 ***********
2025-06-02 17:20:59.123461 | orchestrator |
2025-06-02 17:20:59.123937 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:20:59.124781 | orchestrator | Monday 02 June 2025 17:20:58 +0000 (0:00:00.038) 0:06:34.939 ***********
2025-06-02 17:20:59.125206 | orchestrator |
2025-06-02 17:20:59.125987 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:20:59.126614 | orchestrator | Monday 02 June 2025 17:20:58 +0000 (0:00:00.037) 0:06:34.977 ***********
2025-06-02 17:20:59.127262 | orchestrator |
2025-06-02 17:20:59.127687 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:20:59.128739 | orchestrator | Monday 02 June 2025 17:20:59 +0000 (0:00:00.045) 0:06:35.022 ***********
2025-06-02 17:20:59.128812 | orchestrator |
2025-06-02 17:20:59.129379 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:20:59.129951 | orchestrator | Monday 02 June 2025 17:20:59 +0000 (0:00:00.038) 0:06:35.061 ***********
2025-06-02 17:20:59.130628 | orchestrator |
2025-06-02 17:20:59.130989 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 17:20:59.131444 | orchestrator | Monday 02 June 2025 17:20:59 +0000 (0:00:00.048) 0:06:35.109 ***********
2025-06-02 17:21:00.388815 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:00.388934 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:00.391437 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:00.391869 | orchestrator |
2025-06-02 17:21:00.393420 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-06-02 17:21:00.394322 | orchestrator | Monday 02 June 2025 17:21:00 +0000 (0:00:01.272) 0:06:36.382 ***********
2025-06-02 17:21:01.699982 | orchestrator | changed: [testbed-manager]
2025-06-02 17:21:01.700396 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:01.700644 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:01.700782 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:01.701554 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:01.702226 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:01.703447 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:01.703741 | orchestrator |
2025-06-02 17:21:01.704062 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-06-02 17:21:01.704382 | orchestrator | Monday 02 June 2025 17:21:01 +0000 (0:00:01.315) 0:06:37.697 ***********
2025-06-02 17:21:02.823934 | orchestrator | changed: [testbed-manager]
2025-06-02 17:21:02.824860 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:02.826359 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:02.827539 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:02.828488 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:02.829714 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:02.830616 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:02.830844 | orchestrator |
2025-06-02 17:21:02.831768 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-06-02 17:21:02.832521 | orchestrator | Monday 02 June 2025 17:21:02 +0000 (0:00:01.121) 0:06:38.819 ***********
2025-06-02 17:21:02.968741 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:05.382803 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:05.382932 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:05.382945 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:05.383002 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:05.383312 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:05.384007 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:05.385213 | orchestrator |
2025-06-02 17:21:05.385833 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-06-02 17:21:05.386850 | orchestrator | Monday 02 June 2025 17:21:05 +0000 (0:00:02.557) 0:06:41.377 ***********
2025-06-02 17:21:05.490629 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:05.490747 | orchestrator |
2025-06-02 17:21:05.490769 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-06-02 17:21:05.490790 | orchestrator | Monday 02 June 2025 17:21:05 +0000 (0:00:00.108) 0:06:41.485 ***********
2025-06-02 17:21:06.577000 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:06.578464 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:06.581482 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:06.581537 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:06.581550 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:06.581562 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:06.581614 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:06.581852 | orchestrator |
2025-06-02 17:21:06.582443 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-06-02 17:21:06.582904 | orchestrator | Monday 02 June 2025 17:21:06 +0000 (0:00:01.088) 0:06:42.574 ***********
2025-06-02 17:21:06.947365 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:07.015845 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:07.090833 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:07.176731 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:07.244872 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:07.381341 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:07.381432 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:07.381706 | orchestrator |
2025-06-02 17:21:07.381727 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-06-02 17:21:07.382455 | orchestrator | Monday 02 June 2025 17:21:07 +0000 (0:00:00.807) 0:06:43.381 ***********
2025-06-02 17:21:08.310448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:21:08.313814 | orchestrator |
2025-06-02 17:21:08.313873 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-06-02 17:21:08.313888 | orchestrator | Monday 02 June 2025 17:21:08 +0000 (0:00:00.924) 0:06:44.306 ***********
2025-06-02 17:21:08.744990 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:09.192063 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:09.192604 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:09.193357 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:09.194277 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:09.194661 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:09.195269 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:09.195760 | orchestrator |
2025-06-02 17:21:09.196281 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-06-02 17:21:09.196720 | orchestrator | Monday 02 June 2025 17:21:09 +0000 (0:00:00.885) 0:06:45.191 ***********
2025-06-02 17:21:10.051911 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-06-02 17:21:12.154997 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-06-02 17:21:12.155107 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-06-02 17:21:12.155210 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-06-02 17:21:12.155630 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-06-02 17:21:12.155829 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-06-02 17:21:12.156601 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-06-02 17:21:12.157367 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-06-02 17:21:12.157740 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-06-02 17:21:12.160947 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-06-02 17:21:12.161935 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-06-02 17:21:12.162687 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-06-02 17:21:12.163567 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-06-02 17:21:12.164064 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-06-02 17:21:12.165191 | orchestrator |
2025-06-02 17:21:12.166168 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-06-02 17:21:12.166891 | orchestrator | Monday 02 June 2025 17:21:12 +0000 (0:00:02.959) 0:06:48.151 ***********
2025-06-02 17:21:12.319415 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:12.393577 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:12.469830 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:12.538377 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:12.608820 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:12.712884 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:12.713192 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:12.714166 | orchestrator |
2025-06-02 17:21:12.714921 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-06-02 17:21:12.715800 | orchestrator | Monday 02 June 2025 17:21:12 +0000 (0:00:00.561) 0:06:48.713 ***********
2025-06-02 17:21:13.522375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:21:13.525491 | orchestrator |
2025-06-02 17:21:13.525527 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-06-02 17:21:13.525691 | orchestrator | Monday 02 June 2025 17:21:13 +0000 (0:00:00.804) 0:06:49.518 ***********
2025-06-02 17:21:14.102635 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:14.167747 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:14.582822 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:14.583652 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:14.584392 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:14.587745 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:14.588318 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:14.590332 | orchestrator |
2025-06-02 17:21:14.590402 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-06-02 17:21:14.591119 | orchestrator | Monday 02 June 2025 17:21:14 +0000 (0:00:01.062) 0:06:50.580 ***********
2025-06-02 17:21:15.001743 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:15.412899 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:15.413077 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:15.413562 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:15.414515 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:15.415649 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:15.415942 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:15.417382 | orchestrator |
2025-06-02 17:21:15.417642 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-06-02 17:21:15.418770 | orchestrator | Monday 02 June 2025 17:21:15 +0000 (0:00:00.827) 0:06:51.408 ***********
2025-06-02 17:21:15.545212 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:15.610318 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:15.672971 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:15.753554 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:15.817743 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:15.907985 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:15.908661 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:15.910492 | orchestrator |
2025-06-02 17:21:15.913614 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-06-02 17:21:15.915463 | orchestrator | Monday 02 June 2025 17:21:15 +0000 (0:00:00.496) 0:06:51.905 ***********
2025-06-02 17:21:17.332700 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:17.339401 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:17.339651 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:17.340118 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:17.344663 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:17.344700 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:17.344711 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:17.344723 | orchestrator |
2025-06-02 17:21:17.344735 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-06-02 17:21:17.344748 | orchestrator | Monday 02 June 2025 17:21:17 +0000 (0:00:01.423) 0:06:53.328 ***********
2025-06-02 17:21:17.454982 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:17.527737 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:17.590505 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:17.667854 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:17.739407 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:17.864057 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:17.864223 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:17.866150 | orchestrator |
2025-06-02 17:21:17.866182 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-06-02 17:21:17.866278 | orchestrator | Monday 02 June 2025 17:21:17 +0000 (0:00:00.526) 0:06:53.855 ***********
2025-06-02 17:21:25.949635 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:25.950360 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:25.951645 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:25.952162 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:25.954219 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:25.954876 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:25.955469 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:25.956346 | orchestrator |
2025-06-02 17:21:25.956734 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-06-02 17:21:25.957479 | orchestrator | Monday 02 June 2025 17:21:25 +0000 (0:00:08.091) 0:07:01.946 ***********
2025-06-02 17:21:27.347295 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:27.347391 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:27.348666 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:27.352254 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:27.352362 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:27.352374 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:27.352445 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:27.353092 | orchestrator |
2025-06-02 17:21:27.354160 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-06-02 17:21:27.354839 | orchestrator | Monday 02 June 2025 17:21:27 +0000 (0:00:01.400) 0:07:03.346 ***********
2025-06-02 17:21:29.136041 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:29.136729 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:29.138171 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:29.140669 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:29.141501 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:29.141819 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:29.142641 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:29.143238 | orchestrator |
2025-06-02 17:21:29.143972 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-02 17:21:29.144746 | orchestrator | Monday 02 June 2025 17:21:29 +0000 (0:00:01.784) 0:07:05.131 ***********
2025-06-02 17:21:30.976233 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:30.976557 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:30.979500 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:30.982601 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:30.982635 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:30.982639 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:30.982758 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:30.983933 | orchestrator |
2025-06-02 17:21:30.984788 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 17:21:30.985769 | orchestrator | Monday 02 June 2025 17:21:30 +0000 (0:00:01.841) 0:07:06.973 ***********
2025-06-02 17:21:31.435875 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:31.871077 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:31.871917 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:31.873390 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:31.873984 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:31.874875 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:31.875438 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:31.876619 | orchestrator |
2025-06-02 17:21:31.877602 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 17:21:31.878309 | orchestrator | Monday 02 June 2025 17:21:31 +0000 (0:00:00.896) 0:07:07.869 ***********
2025-06-02 17:21:32.005648 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:32.072438 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:32.138435 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:32.215754 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:32.277069 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:32.706958 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:32.708552 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:32.709306 | orchestrator |
2025-06-02 17:21:32.709863 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-02 17:21:32.710800 | orchestrator | Monday 02 June 2025 17:21:32 +0000 (0:00:00.835) 0:07:08.705 ***********
2025-06-02 17:21:32.836564 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:32.912602 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:32.974672 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:33.039948 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:33.116763 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:33.217735 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:33.218649 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:33.220108 | orchestrator |
2025-06-02 17:21:33.223887 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-06-02 17:21:33.223926 | orchestrator | Monday 02 June 2025 17:21:33 +0000 (0:00:00.510) 0:07:09.215 ***********
2025-06-02 17:21:33.363497 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:33.442750 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:33.515930 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:33.591788 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:33.851561 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:33.965928 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:33.967588 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:33.970575 | orchestrator |
2025-06-02 17:21:33.970611 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-06-02 17:21:33.971193 | orchestrator | Monday 02 June 2025 17:21:33 +0000 (0:00:00.747) 0:07:09.963 ***********
2025-06-02 17:21:34.104760 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:34.171964 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:34.238493 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:34.307840 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:34.373980 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:34.497838 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:34.499539 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:34.500681 | orchestrator |
2025-06-02 17:21:34.501423 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-06-02 17:21:34.503163 | orchestrator | Monday 02 June 2025 17:21:34 +0000 (0:00:00.532) 0:07:10.495 ***********
2025-06-02 17:21:34.637005 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:34.706485 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:34.779269 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:34.860731 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:34.939787 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:35.062762 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:35.063910 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:35.065575 | orchestrator |
2025-06-02 17:21:35.069740 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-06-02 17:21:35.070418 | orchestrator | Monday 02 June 2025 17:21:35 +0000 (0:00:00.566) 0:07:11.062 ***********
2025-06-02 17:21:40.649038 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:40.650219 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:40.651305 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:40.654393 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:40.654434 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:40.654447 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:40.654458 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:40.654469 | orchestrator |
2025-06-02 17:21:40.654891 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-06-02 17:21:40.655573 | orchestrator | Monday 02 June 2025 17:21:40 +0000 (0:00:05.584) 0:07:16.646 ***********
2025-06-02 17:21:40.785911 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:40.852249 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:40.922262 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:41.007920 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:41.074945 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:41.200292 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:41.201649 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:41.201803 | orchestrator |
2025-06-02 17:21:41.205443 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-06-02 17:21:41.206526 | orchestrator | Monday 02 June 2025 17:21:41 +0000 (0:00:00.550) 0:07:17.197 ***********
2025-06-02 17:21:42.270612 |
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:21:42.270816 | orchestrator | 2025-06-02 17:21:42.270936 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-02 17:21:42.271631 | orchestrator | Monday 02 June 2025 17:21:42 +0000 (0:00:01.073) 0:07:18.270 *********** 2025-06-02 17:21:44.013971 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:21:44.014172 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:44.015365 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:21:44.016502 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:21:44.017469 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:21:44.018365 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:21:44.019710 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:21:44.020207 | orchestrator | 2025-06-02 17:21:44.020959 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-02 17:21:44.021767 | orchestrator | Monday 02 June 2025 17:21:44 +0000 (0:00:01.738) 0:07:20.009 *********** 2025-06-02 17:21:45.162616 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:45.163297 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:21:45.163431 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:21:45.163514 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:21:45.164083 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:21:45.164365 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:21:45.164996 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:21:45.165227 | orchestrator | 2025-06-02 17:21:45.165647 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-02 17:21:45.166129 | orchestrator | Monday 02 June 2025 17:21:45 +0000 (0:00:01.152) 
0:07:21.161 *********** 2025-06-02 17:21:45.852917 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:46.301704 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:21:46.301816 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:21:46.301830 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:21:46.302183 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:21:46.303145 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:21:46.303877 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:21:46.304959 | orchestrator | 2025-06-02 17:21:46.306293 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-02 17:21:46.306913 | orchestrator | Monday 02 June 2025 17:21:46 +0000 (0:00:01.132) 0:07:22.294 *********** 2025-06-02 17:21:48.028244 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 17:21:48.028753 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 17:21:48.030485 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 17:21:48.031393 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 17:21:48.033543 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 17:21:48.034262 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 17:21:48.035236 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 17:21:48.036612 | orchestrator | 2025-06-02 17:21:48.037310 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-02 17:21:48.038076 | orchestrator | Monday 02 June 2025 17:21:48 +0000 (0:00:01.731) 0:07:24.025 *********** 2025-06-02 17:21:48.918079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:21:48.920272 | orchestrator | 2025-06-02 17:21:48.921265 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-02 17:21:48.922216 | orchestrator | Monday 02 June 2025 17:21:48 +0000 (0:00:00.889) 0:07:24.915 *********** 2025-06-02 17:21:57.535454 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:21:57.536850 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:21:57.537769 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:21:57.538582 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:21:57.539854 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:21:57.540567 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:21:57.541373 | orchestrator | changed: [testbed-manager] 2025-06-02 17:21:57.541470 | orchestrator | 2025-06-02 17:21:57.542224 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-02 17:21:57.542939 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:08.615) 0:07:33.531 *********** 2025-06-02 17:21:59.254618 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:59.254994 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:21:59.255557 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:21:59.259682 | orchestrator | ok: 
[testbed-node-5] 2025-06-02 17:21:59.259820 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:21:59.259837 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:21:59.259849 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:21:59.259860 | orchestrator | 2025-06-02 17:21:59.259942 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-02 17:21:59.260308 | orchestrator | Monday 02 June 2025 17:21:59 +0000 (0:00:01.720) 0:07:35.251 *********** 2025-06-02 17:22:00.615807 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:22:00.616563 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:22:00.617393 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:22:00.618393 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:22:00.619902 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:22:00.620513 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:22:00.621442 | orchestrator | 2025-06-02 17:22:00.622123 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-02 17:22:00.622559 | orchestrator | Monday 02 June 2025 17:22:00 +0000 (0:00:01.359) 0:07:36.611 *********** 2025-06-02 17:22:02.073731 | orchestrator | changed: [testbed-manager] 2025-06-02 17:22:02.074989 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:22:02.076506 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:22:02.078120 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:22:02.079285 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:22:02.080569 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:22:02.081034 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:22:02.082631 | orchestrator | 2025-06-02 17:22:02.083132 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-02 17:22:02.084197 | orchestrator | 2025-06-02 17:22:02.084930 | orchestrator | TASK [Include hardening role] 
************************************************** 2025-06-02 17:22:02.085832 | orchestrator | Monday 02 June 2025 17:22:02 +0000 (0:00:01.460) 0:07:38.072 *********** 2025-06-02 17:22:02.214485 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:22:02.278839 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:22:02.354545 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:22:02.419874 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:22:02.486831 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:22:02.619691 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:22:02.621065 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:22:02.622268 | orchestrator | 2025-06-02 17:22:02.623493 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-02 17:22:02.625116 | orchestrator | 2025-06-02 17:22:02.626243 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-02 17:22:02.627553 | orchestrator | Monday 02 June 2025 17:22:02 +0000 (0:00:00.544) 0:07:38.617 *********** 2025-06-02 17:22:03.992580 | orchestrator | changed: [testbed-manager] 2025-06-02 17:22:03.993278 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:22:03.994698 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:22:03.995969 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:22:03.996006 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:22:03.996879 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:22:03.997629 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:22:03.998559 | orchestrator | 2025-06-02 17:22:03.999048 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-02 17:22:04.000013 | orchestrator | Monday 02 June 2025 17:22:03 +0000 (0:00:01.370) 0:07:39.987 *********** 2025-06-02 17:22:05.733787 | orchestrator | ok: [testbed-manager] 2025-06-02 17:22:05.735166 | 
orchestrator | ok: [testbed-node-3] 2025-06-02 17:22:05.736260 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:22:05.739015 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:22:05.740474 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:22:05.741402 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:22:05.742601 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:22:05.743605 | orchestrator | 2025-06-02 17:22:05.744373 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-02 17:22:05.745050 | orchestrator | Monday 02 June 2025 17:22:05 +0000 (0:00:01.743) 0:07:41.730 *********** 2025-06-02 17:22:05.862220 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:22:05.948269 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:22:06.015655 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:22:06.083821 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:22:06.157138 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:22:06.568288 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:22:06.568869 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:22:06.570561 | orchestrator | 2025-06-02 17:22:06.572681 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-02 17:22:06.574476 | orchestrator | Monday 02 June 2025 17:22:06 +0000 (0:00:00.834) 0:07:42.565 *********** 2025-06-02 17:22:07.904228 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:22:07.904393 | orchestrator | changed: [testbed-manager] 2025-06-02 17:22:07.905733 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:22:07.907282 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:22:07.907944 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:22:07.910720 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:22:07.912343 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:22:07.913784 | orchestrator | 2025-06-02 17:22:07.915383 | 
orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-02 17:22:07.917109 | orchestrator | 2025-06-02 17:22:07.917997 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-02 17:22:07.919179 | orchestrator | Monday 02 June 2025 17:22:07 +0000 (0:00:01.333) 0:07:43.899 *********** 2025-06-02 17:22:08.937240 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:22:08.937348 | orchestrator | 2025-06-02 17:22:08.938584 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 17:22:08.939846 | orchestrator | Monday 02 June 2025 17:22:08 +0000 (0:00:01.036) 0:07:44.936 *********** 2025-06-02 17:22:09.350574 | orchestrator | ok: [testbed-manager] 2025-06-02 17:22:09.800549 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:22:09.801619 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:22:09.802557 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:22:09.803124 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:22:09.804193 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:22:09.804786 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:22:09.805443 | orchestrator | 2025-06-02 17:22:09.805958 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 17:22:09.806476 | orchestrator | Monday 02 June 2025 17:22:09 +0000 (0:00:00.864) 0:07:45.800 *********** 2025-06-02 17:22:10.919210 | orchestrator | changed: [testbed-manager] 2025-06-02 17:22:10.922188 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:22:10.924412 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:22:10.924458 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:22:10.925188 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:22:10.930301 | orchestrator | 
changed: [testbed-node-1] 2025-06-02 17:22:10.930334 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:22:10.930340 | orchestrator | 2025-06-02 17:22:10.930347 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-02 17:22:10.930459 | orchestrator | Monday 02 June 2025 17:22:10 +0000 (0:00:01.114) 0:07:46.915 *********** 2025-06-02 17:22:11.960359 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:22:11.963528 | orchestrator | 2025-06-02 17:22:11.964622 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 17:22:11.965359 | orchestrator | Monday 02 June 2025 17:22:11 +0000 (0:00:01.042) 0:07:47.957 *********** 2025-06-02 17:22:12.795498 | orchestrator | ok: [testbed-manager] 2025-06-02 17:22:12.796222 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:22:12.796874 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:22:12.797946 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:22:12.799515 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:22:12.800635 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:22:12.801099 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:22:12.801806 | orchestrator | 2025-06-02 17:22:12.802540 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 17:22:12.803364 | orchestrator | Monday 02 June 2025 17:22:12 +0000 (0:00:00.834) 0:07:48.792 *********** 2025-06-02 17:22:13.256858 | orchestrator | changed: [testbed-manager] 2025-06-02 17:22:13.993107 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:22:13.993332 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:22:13.995315 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:22:13.996230 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:22:13.996257 | orchestrator | 
changed: [testbed-node-1]
2025-06-02 17:22:13.996925 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:13.997471 | orchestrator |
2025-06-02 17:22:13.997956 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:22:13.998461 | orchestrator | 2025-06-02 17:22:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:22:13.998530 | orchestrator | 2025-06-02 17:22:13 | INFO  | Please wait and do not abort execution.
2025-06-02 17:22:13.999329 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-02 17:22:13.999955 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:22:14.002127 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:22:14.002268 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:22:14.003230 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-02 17:22:14.003639 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:22:14.004563 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:22:14.004851 | orchestrator |
2025-06-02 17:22:14.005561 | orchestrator |
2025-06-02 17:22:14.006186 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:22:14.006709 | orchestrator | Monday 02 June 2025 17:22:13 +0000 (0:00:01.200) 0:07:49.993 ***********
2025-06-02 17:22:14.007496 | orchestrator | ===============================================================================
2025-06-02 17:22:14.008154 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.54s
2025-06-02 17:22:14.008627 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.79s
2025-06-02 17:22:14.009167 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.10s
2025-06-02 17:22:14.009891 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.68s
2025-06-02 17:22:14.010310 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.24s
2025-06-02 17:22:14.010981 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.84s
2025-06-02 17:22:14.011629 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.40s
2025-06-02 17:22:14.011814 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.47s
2025-06-02 17:22:14.012297 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.62s
2025-06-02 17:22:14.012498 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.51s
2025-06-02 17:22:14.012977 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.50s
2025-06-02 17:22:14.013866 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.09s
2025-06-02 17:22:14.014242 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.06s
2025-06-02 17:22:14.014338 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.45s
2025-06-02 17:22:14.014642 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.28s
2025-06-02 17:22:14.015036 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 6.92s
2025-06-02 17:22:14.015425 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.09s
2025-06-02 17:22:14.015798 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.76s
2025-06-02 17:22:14.016225 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.58s
2025-06-02 17:22:14.016793 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.58s
2025-06-02 17:22:14.880007 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-02 17:22:14.880124 | orchestrator | + osism apply network
2025-06-02 17:22:17.361798 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:22:17.361898 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:22:17.361912 | orchestrator | Registering Redlock._release_script
2025-06-02 17:22:17.429368 | orchestrator | 2025-06-02 17:22:17 | INFO  | Task e907d201-f5b9-4dbf-8ca4-ded75021bdd0 (network) was prepared for execution.
2025-06-02 17:22:17.429482 | orchestrator | 2025-06-02 17:22:17 | INFO  | It takes a moment until task e907d201-f5b9-4dbf-8ca4-ded75021bdd0 (network) has been started and output is visible here.
2025-06-02 17:22:22.097378 | orchestrator | 2025-06-02 17:22:22.097469 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-02 17:22:22.099666 | orchestrator | 2025-06-02 17:22:22.100244 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-02 17:22:22.100336 | orchestrator | Monday 02 June 2025 17:22:22 +0000 (0:00:00.289) 0:00:00.289 *********** 2025-06-02 17:22:22.255954 | orchestrator | ok: [testbed-manager] 2025-06-02 17:22:22.340883 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:22:22.419959 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:22:22.498869 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:22:22.690603 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:22:22.827167 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:22:22.828296 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:22:22.831967 | orchestrator | 2025-06-02 17:22:22.831998 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-02 17:22:22.832012 | orchestrator | Monday 02 June 2025 17:22:22 +0000 (0:00:00.729) 0:00:01.019 *********** 2025-06-02 17:22:24.078560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:22:24.079738 | orchestrator | 2025-06-02 17:22:24.080727 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-02 17:22:24.081914 | orchestrator | Monday 02 June 2025 17:22:24 +0000 (0:00:01.250) 0:00:02.269 *********** 2025-06-02 17:22:25.992593 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:22:25.993666 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:22:25.993744 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:22:25.995843 | 
orchestrator | ok: [testbed-manager] 2025-06-02 17:22:25.997411 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:22:25.999518 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:22:26.000757 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:22:26.002463 | orchestrator | 2025-06-02 17:22:26.003114 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-02 17:22:26.005281 | orchestrator | Monday 02 June 2025 17:22:25 +0000 (0:00:01.917) 0:00:04.186 *********** 2025-06-02 17:22:27.738463 | orchestrator | ok: [testbed-manager] 2025-06-02 17:22:27.739443 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:22:27.740787 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:22:27.744331 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:22:27.744365 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:22:27.745477 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:22:27.745572 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:22:27.746180 | orchestrator | 2025-06-02 17:22:27.746751 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-02 17:22:27.747503 | orchestrator | Monday 02 June 2025 17:22:27 +0000 (0:00:01.740) 0:00:05.926 *********** 2025-06-02 17:22:28.274374 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-02 17:22:28.274556 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-02 17:22:28.720867 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-02 17:22:28.721026 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-02 17:22:28.721474 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-02 17:22:28.721567 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-02 17:22:28.722377 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-02 17:22:28.722586 | orchestrator | 2025-06-02 17:22:28.723123 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] **********
2025-06-02 17:22:28.723533 | orchestrator | Monday 02 June 2025 17:22:28 +0000 (0:00:00.988) 0:00:06.915 ***********
2025-06-02 17:22:32.212268 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 17:22:32.213236 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 17:22:32.214434 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 17:22:32.216295 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 17:22:32.217025 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 17:22:32.217537 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 17:22:32.218394 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 17:22:32.218946 | orchestrator |
2025-06-02 17:22:32.219666 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-02 17:22:32.220453 | orchestrator | Monday 02 June 2025 17:22:32 +0000 (0:00:03.485) 0:00:10.401 ***********
2025-06-02 17:22:33.672319 | orchestrator | changed: [testbed-manager]
2025-06-02 17:22:33.672881 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:33.675949 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:33.675978 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:33.675990 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:33.676538 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:33.677064 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:33.677688 | orchestrator |
2025-06-02 17:22:33.678318 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-02 17:22:33.679127 | orchestrator | Monday 02 June 2025 17:22:33 +0000 (0:00:01.463) 0:00:11.865 ***********
2025-06-02 17:22:35.646681 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 17:22:35.646904 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 17:22:35.650092 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 17:22:35.650144 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 17:22:35.650157 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 17:22:35.651268 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 17:22:35.652261 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 17:22:35.653428 | orchestrator |
2025-06-02 17:22:35.654244 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-02 17:22:35.655243 | orchestrator | Monday 02 June 2025 17:22:35 +0000 (0:00:01.973) 0:00:13.839 ***********
2025-06-02 17:22:36.088922 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:36.374655 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:36.804365 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:36.804472 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:36.807109 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:36.807200 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:36.808083 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:36.808799 | orchestrator |
2025-06-02 17:22:36.809752 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-02 17:22:36.810459 | orchestrator | Monday 02 June 2025 17:22:36 +0000 (0:00:01.154) 0:00:14.993 ***********
2025-06-02 17:22:36.971404 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:37.058240 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:37.143113 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:37.226770 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:37.311149 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:37.460501 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:37.460643 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:37.461653 | orchestrator |
2025-06-02 17:22:37.462457 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-02 17:22:37.463403 | orchestrator | Monday 02 June 2025 17:22:37 +0000 (0:00:00.660) 0:00:15.654 ***********
2025-06-02 17:22:39.581109 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:39.583160 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:39.584747 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:39.586420 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:39.588169 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:39.591163 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:39.592121 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:39.593038 | orchestrator |
2025-06-02 17:22:39.594330 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-02 17:22:39.595200 | orchestrator | Monday 02 June 2025 17:22:39 +0000 (0:00:02.115) 0:00:17.770 ***********
2025-06-02 17:22:39.839990 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:39.926258 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:40.012454 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:40.093301 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:40.515161 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:40.516188 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:40.517953 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-02 17:22:40.518963 | orchestrator |
2025-06-02 17:22:40.520490 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-02 17:22:40.521845 | orchestrator | Monday 02 June 2025 17:22:40 +0000 (0:00:00.938) 0:00:18.708 ***********
2025-06-02 17:22:42.195204 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:42.196560 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:42.199513 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:42.200938 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:42.202383 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:42.203877 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:42.204847 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:42.205720 | orchestrator |
2025-06-02 17:22:42.206555 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-02 17:22:42.207203 | orchestrator | Monday 02 June 2025 17:22:42 +0000 (0:00:01.676) 0:00:20.385 ***********
2025-06-02 17:22:43.449846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:22:43.452625 | orchestrator |
2025-06-02 17:22:43.452691 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-02 17:22:43.453635 | orchestrator | Monday 02 June 2025 17:22:43 +0000 (0:00:01.254) 0:00:21.639 ***********
2025-06-02 17:22:44.593534 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:44.596347 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:44.596907 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:44.598650 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:44.599505 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:44.601020 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:44.601818 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:44.602540 | orchestrator |
2025-06-02 17:22:44.603375 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-06-02 17:22:44.604231 | orchestrator | Monday 02 June 2025 17:22:44 +0000 (0:00:01.143) 0:00:22.782 ***********
2025-06-02 17:22:44.773125 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:44.874385 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:44.972713 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:45.063960 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:45.148457 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:45.275620 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:45.275845 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:45.277312 | orchestrator |
2025-06-02 17:22:45.280388 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-02 17:22:45.282186 | orchestrator | Monday 02 June 2025 17:22:45 +0000 (0:00:00.683) 0:00:23.466 ***********
2025-06-02 17:22:45.638930 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:22:45.639095 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:22:46.058759 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:22:46.059596 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:22:46.061201 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:22:46.062482 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:22:46.063617 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:22:46.065217 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:22:46.068178 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:22:46.068221 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:22:46.554858 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:22:46.555364 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:22:46.556910 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:22:46.557786 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:22:46.558527 | orchestrator |
2025-06-02 17:22:46.559819 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-06-02 17:22:46.560585 | orchestrator | Monday 02 June 2025 17:22:46 +0000 (0:00:01.278) 0:00:24.745 ***********
2025-06-02 17:22:46.717698 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:46.801980 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:46.883912 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:46.966151 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:47.056831 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:47.203814 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:47.204398 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:47.205493 | orchestrator |
2025-06-02 17:22:47.206548 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-06-02 17:22:47.207771 | orchestrator | Monday 02 June 2025 17:22:47 +0000 (0:00:00.652) 0:00:25.397 ***********
2025-06-02 17:22:50.832544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-3, testbed-node-1, testbed-node-0, testbed-node-5, testbed-node-2, testbed-node-4
2025-06-02 17:22:50.833482 | orchestrator |
2025-06-02 17:22:50.838267 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-06-02 17:22:50.839064 | orchestrator | Monday 02 June 2025 17:22:50 +0000 (0:00:03.624) 0:00:29.022 ***********
2025-06-02 17:22:56.148832 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:22:56.149445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:22:56.150548 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:22:56.152127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:22:56.153933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:22:56.157332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:22:56.157385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:22:56.157420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:22:56.157434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:22:56.157451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:22:56.158146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:22:56.159159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:22:56.159605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:22:56.160461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:22:56.161588 | orchestrator |
2025-06-02 17:22:56.163100 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-06-02 17:22:56.164013 | orchestrator | Monday 02 June 2025 17:22:56 +0000 (0:00:05.296) 0:00:34.319 ***********
2025-06-02 17:23:01.376316 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:01.377363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:01.378554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:01.379845 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:01.380858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:01.382306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:01.383173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:01.383976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:01.385076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:01.385541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:01.386460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:01.386956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:01.388128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:01.389006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:01.389691 | orchestrator |
2025-06-02 17:23:01.390178 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-06-02 17:23:01.391212 | orchestrator | Monday 02 June 2025 17:23:01 +0000 (0:00:05.247) 0:00:39.567 ***********
2025-06-02 17:23:02.671132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:23:02.674758 | orchestrator |
2025-06-02 17:23:02.674813 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-02 17:23:02.674826 | orchestrator | Monday 02 June 2025 17:23:02 +0000 (0:00:01.294) 0:00:40.861 ***********
2025-06-02 17:23:03.142163 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:03.437218 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:03.872549 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:03.873799 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:03.875214 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:03.876427 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:03.877648 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:03.881165 | orchestrator |
2025-06-02 17:23:03.883660 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-02 17:23:03.884372 | orchestrator | Monday 02 June 2025 17:23:03 +0000 (0:00:01.200) 0:00:42.062 ***********
2025-06-02 17:23:03.952718 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:23:04.075669 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:23:04.075871 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:23:04.076899 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:23:04.079405 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:23:04.079429 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:23:04.079637 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:23:04.080572 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:23:04.175442 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:23:04.175735 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:23:04.179134 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:23:04.179187 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:23:04.179200 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:23:04.266771 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:04.266979 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:23:04.268378 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:23:04.272179 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:23:04.272228 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:23:04.376393 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:04.377899 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:23:04.381468 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:23:04.381565 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:23:04.381673 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:23:04.677984 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:04.678978 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:23:04.679466 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:23:04.683216 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:23:04.683258 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:23:06.010391 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:06.010555 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:06.011436 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:23:06.012011 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:23:06.012392 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:23:06.012616 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:23:06.013091 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:06.013471 | orchestrator |
2025-06-02 17:23:06.013944 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-06-02 17:23:06.014464 | orchestrator | Monday 02 June 2025 17:23:05 +0000 (0:00:02.138) 0:00:44.200 ***********
2025-06-02 17:23:06.182188 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:23:06.284098 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:06.367903 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:06.456194 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:06.542248 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:06.670591 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:06.672539 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:06.672650 | orchestrator |
2025-06-02 17:23:06.676955 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-06-02 17:23:06.678485 | orchestrator | Monday 02 June 2025 17:23:06 +0000 (0:00:00.664) 0:00:44.865 ***********
2025-06-02 17:23:06.840262 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:23:07.126147 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:07.206404 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:07.310795 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:07.404390 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:07.448589 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:07.449662 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:07.449696 | orchestrator |
2025-06-02 17:23:07.450369 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:23:07.450926 | orchestrator | 2025-06-02 17:23:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:23:07.450953 | orchestrator | 2025-06-02 17:23:07 | INFO  | Please wait and do not abort execution.
2025-06-02 17:23:07.452033 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:23:07.454391 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:23:07.455336 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:23:07.456131 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:23:07.456516 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:23:07.457456 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:23:07.458160 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:23:07.458654 | orchestrator |
2025-06-02 17:23:07.459408 | orchestrator |
2025-06-02 17:23:07.460133 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:23:07.460764 | orchestrator | Monday 02 June 2025 17:23:07 +0000 (0:00:00.778) 0:00:45.643 ***********
2025-06-02 17:23:07.461569 | orchestrator | ===============================================================================
2025-06-02 17:23:07.462300 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.30s
2025-06-02 17:23:07.462801 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.25s
2025-06-02 17:23:07.463153 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.62s
2025-06-02 17:23:07.464073 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.49s
2025-06-02 17:23:07.464545 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.14s
2025-06-02 17:23:07.464809 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s
2025-06-02 17:23:07.465466 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.97s
2025-06-02 17:23:07.465573 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.92s
2025-06-02 17:23:07.466144 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s
2025-06-02 17:23:07.466560 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2025-06-02 17:23:07.467092 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.46s
2025-06-02 17:23:07.467430 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s
2025-06-02 17:23:07.467837 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.28s
2025-06-02 17:23:07.468362 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.25s
2025-06-02 17:23:07.468701 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.25s
2025-06-02 17:23:07.468962 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s
2025-06-02 17:23:07.469503 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2025-06-02 17:23:07.470259 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.14s
2025-06-02 17:23:07.470441 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s
2025-06-02 17:23:07.470680 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.94s
2025-06-02 17:23:08.167830 | orchestrator | + osism apply wireguard
2025-06-02 17:23:09.952650 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:23:09.952764 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:23:09.952787 | orchestrator | Registering Redlock._release_script
2025-06-02 17:23:10.017518 | orchestrator | 2025-06-02 17:23:10 | INFO  | Task f81ce2c9-ae74-4a17-afe7-25c77bd1c460 (wireguard) was prepared for execution.
2025-06-02 17:23:10.017621 | orchestrator | 2025-06-02 17:23:10 | INFO  | It takes a moment until task f81ce2c9-ae74-4a17-afe7-25c77bd1c460 (wireguard) has been started and output is visible here.
2025-06-02 17:23:14.225500 | orchestrator |
2025-06-02 17:23:14.226228 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-06-02 17:23:14.227832 | orchestrator |
2025-06-02 17:23:14.230919 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-06-02 17:23:14.232310 | orchestrator | Monday 02 June 2025 17:23:14 +0000 (0:00:00.229) 0:00:00.229 ***********
2025-06-02 17:23:15.846209 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:15.846942 | orchestrator |
2025-06-02 17:23:15.847111 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-06-02 17:23:15.848647 | orchestrator | Monday 02 June 2025 17:23:15 +0000 (0:00:01.620) 0:00:01.849 ***********
2025-06-02 17:23:22.574312 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:22.574522 | orchestrator |
2025-06-02 17:23:22.576863 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-06-02 17:23:22.578537 | orchestrator | Monday 02 June 2025 17:23:22 +0000 (0:00:06.729) 0:00:08.578 ***********
2025-06-02 17:23:23.153935 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:23.156625 | orchestrator |
2025-06-02 17:23:23.157934 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-06-02 17:23:23.158957 | orchestrator | Monday 02 June 2025 17:23:23 +0000 (0:00:00.581) 0:00:09.160 ***********
2025-06-02 17:23:23.564371 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:23.566877 | orchestrator |
2025-06-02 17:23:23.567606 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-06-02 17:23:23.568752 | orchestrator | Monday 02 June 2025 17:23:23 +0000 (0:00:00.409) 0:00:09.570 ***********
2025-06-02 17:23:24.123846 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:24.124936 | orchestrator |
2025-06-02 17:23:24.126217 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-02 17:23:24.127073 | orchestrator | Monday 02 June 2025 17:23:24 +0000 (0:00:00.558) 0:00:10.129 ***********
2025-06-02 17:23:24.701636 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:24.701755 | orchestrator |
2025-06-02 17:23:24.702400 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-02 17:23:24.703214 | orchestrator | Monday 02 June 2025 17:23:24 +0000 (0:00:00.576) 0:00:10.706 ***********
2025-06-02 17:23:25.140686 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:25.145495 | orchestrator |
2025-06-02 17:23:25.146871 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-02 17:23:25.147681 | orchestrator | Monday 02 June 2025 17:23:25 +0000 (0:00:00.441) 0:00:11.147 ***********
2025-06-02 17:23:26.421758 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:26.422214 | orchestrator |
2025-06-02 17:23:26.423393 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-02 17:23:26.424686 | orchestrator | Monday 02 June 2025 17:23:26 +0000 (0:00:01.277) 0:00:12.425 ***********
2025-06-02 17:23:27.407569 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:23:27.408238 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:27.409440 | orchestrator |
2025-06-02 17:23:27.409466 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-02 17:23:27.410689 | orchestrator | Monday 02 June 2025 17:23:27 +0000 (0:00:00.988) 0:00:13.414 ***********
2025-06-02 17:23:29.186389 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:29.187883 | orchestrator |
2025-06-02 17:23:29.188921 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-02 17:23:29.189426 | orchestrator | Monday 02 June 2025 17:23:29 +0000 (0:00:01.777) 0:00:15.191 ***********
2025-06-02 17:23:30.125753 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:30.125863 | orchestrator |
2025-06-02 17:23:30.126244 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:23:30.126371 | orchestrator | 2025-06-02 17:23:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:23:30.126464 | orchestrator | 2025-06-02 17:23:30 | INFO  | Please wait and do not abort execution.
2025-06-02 17:23:30.127486 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:23:30.127897 | orchestrator |
2025-06-02 17:23:30.128326 | orchestrator |
2025-06-02 17:23:30.128778 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:23:30.129196 | orchestrator | Monday 02 June 2025 17:23:30 +0000 (0:00:00.939) 0:00:16.131 ***********
2025-06-02 17:23:30.130124 | orchestrator | ===============================================================================
2025-06-02 17:23:30.130576 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.73s
2025-06-02 17:23:30.131018 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.78s
2025-06-02 17:23:30.131299 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.62s
2025-06-02 17:23:30.131792 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.28s
2025-06-02 17:23:30.132157 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s
2025-06-02 17:23:30.132553 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-06-02 17:23:30.132897 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s
2025-06-02 17:23:30.133284 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.58s
2025-06-02 17:23:30.134084 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s
2025-06-02 17:23:30.134313 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2025-06-02 17:23:30.134600 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2025-06-02 17:23:30.814458 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-02 17:23:30.851317 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-02 17:23:30.851380 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-02 17:23:30.925419 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 202 0 --:--:-- --:--:-- --:--:-- 205
2025-06-02 17:23:30.940044 | orchestrator | + osism apply --environment custom workarounds
2025-06-02 17:23:32.727958 | orchestrator | 2025-06-02 17:23:32 | INFO  | Trying to run play workarounds in environment custom
2025-06-02 17:23:32.732865 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:23:32.732943 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:23:32.732962 | orchestrator | Registering Redlock._release_script
2025-06-02 17:23:32.794787 | orchestrator | 2025-06-02 17:23:32 | INFO  | Task e61c82d4-77eb-451c-9e10-88373d606b06 (workarounds) was prepared for execution.
2025-06-02 17:23:32.794881 | orchestrator | 2025-06-02 17:23:32 | INFO  | It takes a moment until task e61c82d4-77eb-451c-9e10-88373d606b06 (workarounds) has been started and output is visible here.
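The `PLAY RECAP` host lines in the output above follow a fixed `host : key=value ...` layout, so a wrapper around such a job can evaluate them mechanically instead of grepping for the word "failed". A minimal sketch (a hypothetical helper, not part of the OSISM or Zuul tooling):

```python
import re

# Matches one Ansible PLAY RECAP host line, e.g.
#   testbed-manager : ok=11  changed=7  unreachable=0 failed=0 ...
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>.*)$")


def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for a PLAY RECAP host line."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(val)
        for key, val in re.findall(r"(\w+)=(\d+)", m.group("counters"))
    }
    return m.group("host"), counters


def play_failed(line: str) -> bool:
    """True if the recap reports failed or unreachable hosts."""
    _, counters = parse_recap(line)
    return counters.get("failed", 0) > 0 or counters.get("unreachable", 0) > 0
```

Applied to the wireguard recap above, `play_failed("testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0")` is False, so the play counts as successful.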
2025-06-02 17:23:37.007093 | orchestrator |
2025-06-02 17:23:37.010005 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:23:37.010165 | orchestrator |
2025-06-02 17:23:37.010518 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-02 17:23:37.012573 | orchestrator | Monday 02 June 2025 17:23:36 +0000 (0:00:00.157) 0:00:00.157 ***********
2025-06-02 17:23:37.183359 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-02 17:23:37.286424 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-02 17:23:37.369758 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-02 17:23:37.454073 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-02 17:23:37.653857 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-02 17:23:37.814854 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-02 17:23:37.815398 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-02 17:23:37.816292 | orchestrator |
2025-06-02 17:23:37.817057 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-02 17:23:37.817969 | orchestrator |
2025-06-02 17:23:37.818452 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 17:23:37.819036 | orchestrator | Monday 02 June 2025 17:23:37 +0000 (0:00:00.808) 0:00:00.966 ***********
2025-06-02 17:23:40.402816 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:40.402904 | orchestrator |
2025-06-02 17:23:40.404139 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-02 17:23:40.407053 | orchestrator |
2025-06-02 17:23:40.407078 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 17:23:40.407091 | orchestrator | Monday 02 June 2025 17:23:40 +0000 (0:00:02.583) 0:00:03.550 ***********
2025-06-02 17:23:42.192486 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:42.195837 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:42.195911 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:42.195926 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:42.197058 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:42.198210 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:42.199930 | orchestrator |
2025-06-02 17:23:42.201092 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-02 17:23:42.202322 | orchestrator |
2025-06-02 17:23:42.203169 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-02 17:23:42.206127 | orchestrator | Monday 02 June 2025 17:23:42 +0000 (0:00:01.794) 0:00:05.344 ***********
2025-06-02 17:23:43.723659 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:23:43.727567 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:23:43.727603 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:23:43.727636 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:23:43.728252 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:23:43.729447 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:23:43.730254 | orchestrator |
2025-06-02 17:23:43.731419 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-02 17:23:43.731893 | orchestrator | Monday 02 June 2025 17:23:43 +0000 (0:00:01.528) 0:00:06.872 ***********
2025-06-02 17:23:47.478430 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:23:47.479485 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:23:47.479516 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:23:47.480538 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:23:47.481222 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:23:47.482815 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:23:47.483939 | orchestrator |
2025-06-02 17:23:47.483964 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-02 17:23:47.484752 | orchestrator | Monday 02 June 2025 17:23:47 +0000 (0:00:03.757) 0:00:10.629 ***********
2025-06-02 17:23:47.643171 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:47.719655 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:47.802296 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:47.886301 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:48.236761 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:48.237080 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:48.238851 | orchestrator |
2025-06-02 17:23:48.239835 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-02 17:23:48.241468 | orchestrator |
2025-06-02 17:23:48.242248 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-02 17:23:48.243796 | orchestrator | Monday 02 June 2025 17:23:48 +0000 (0:00:00.757) 0:00:11.387 ***********
2025-06-02 17:23:49.940849 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:49.941222 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:23:49.943842 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:23:49.943884 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:23:49.944092 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:23:49.944448 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:23:49.945190 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:23:49.950096 | orchestrator |
2025-06-02 17:23:49.950200 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-02 17:23:49.950220 | orchestrator | Monday 02 June 2025 17:23:49 +0000 (0:00:01.701) 0:00:13.089 ***********
2025-06-02 17:23:51.571343 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:51.572540 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:23:51.573164 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:23:51.577287 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:23:51.578259 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:23:51.578285 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:23:51.582386 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:23:51.582418 | orchestrator |
2025-06-02 17:23:51.582431 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-02 17:23:51.583158 | orchestrator | Monday 02 June 2025 17:23:51 +0000 (0:00:01.621) 0:00:14.711 ***********
2025-06-02 17:23:53.170690 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:53.170848 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:53.171650 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:53.173862 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:53.175276 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:53.176143 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:53.177397 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:53.178858 | orchestrator |
2025-06-02 17:23:53.179730 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-02 17:23:53.180679 | orchestrator | Monday 02 June 2025 17:23:53 +0000 (0:00:01.605) 0:00:16.317 ***********
2025-06-02 17:23:55.103797 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:23:55.103904 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:55.106125 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:23:55.107101 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:23:55.108133 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:23:55.110134 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:23:55.111281 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:23:55.112479 | orchestrator |
2025-06-02 17:23:55.113160 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-06-02 17:23:55.113885 | orchestrator | Monday 02 June 2025 17:23:55 +0000 (0:00:01.929) 0:00:18.246 ***********
2025-06-02 17:23:55.296551 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:23:55.397026 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:55.491592 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:55.572925 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:55.682399 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:55.795656 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:55.797338 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:55.798490 | orchestrator |
2025-06-02 17:23:55.800142 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-06-02 17:23:55.801660 | orchestrator |
2025-06-02 17:23:55.802170 | orchestrator | TASK [Install python3-docker] **************************************************
2025-06-02 17:23:55.803089 | orchestrator | Monday 02 June 2025 17:23:55 +0000 (0:00:00.701) 0:00:18.948 ***********
2025-06-02 17:23:58.717892 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:58.718700 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:58.719088 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:58.720788 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:58.723011 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:58.724293 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:58.725297 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:58.726078 | orchestrator |
2025-06-02 17:23:58.727046 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:23:58.727697 | orchestrator | 2025-06-02 17:23:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:23:58.728175 | orchestrator | 2025-06-02 17:23:58 | INFO  | Please wait and do not abort execution.
2025-06-02 17:23:58.729300 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:23:58.731124 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:23:58.734206 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:23:58.734802 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:23:58.735933 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:23:58.736727 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:23:58.738538 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:23:58.739190 | orchestrator |
2025-06-02 17:23:58.740222 | orchestrator |
2025-06-02 17:23:58.741225 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:23:58.741740 | orchestrator | Monday 02 June 2025 17:23:58 +0000 (0:00:02.917) 0:00:21.865 ***********
2025-06-02 17:23:58.742640 | orchestrator | ===============================================================================
2025-06-02 17:23:58.743538 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.76s
2025-06-02 17:23:58.744292 | orchestrator | Install python3-docker -------------------------------------------------- 2.92s
2025-06-02 17:23:58.744673 | orchestrator | Apply netplan configuration --------------------------------------------- 2.58s
2025-06-02 17:23:58.745084 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.93s
2025-06-02 17:23:58.745929 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s
2025-06-02 17:23:58.746583 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2025-06-02 17:23:58.747116 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s
2025-06-02 17:23:58.747991 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.61s
2025-06-02 17:23:58.748367 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s
2025-06-02 17:23:58.749183 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.81s
2025-06-02 17:23:58.750056 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s
2025-06-02 17:23:58.751243 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s
2025-06-02 17:23:59.404786 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-06-02 17:24:01.152766 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:24:01.152873 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:24:01.152888 | orchestrator | Registering Redlock._release_script
2025-06-02 17:24:01.211649 | orchestrator | 2025-06-02 17:24:01 | INFO  | Task 4d2d0305-c448-49fa-a562-7fc81dad9214 (reboot) was prepared for execution.
2025-06-02 17:24:01.211724 | orchestrator | 2025-06-02 17:24:01 | INFO  | It takes a moment until task 4d2d0305-c448-49fa-a562-7fc81dad9214 (reboot) has been started and output is visible here.
2025-06-02 17:24:05.224761 | orchestrator |
2025-06-02 17:24:05.225913 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 17:24:05.228663 | orchestrator |
2025-06-02 17:24:05.229081 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 17:24:05.229809 | orchestrator | Monday 02 June 2025 17:24:05 +0000 (0:00:00.218) 0:00:00.218 ***********
2025-06-02 17:24:05.324983 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:24:05.325155 | orchestrator |
2025-06-02 17:24:05.326159 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 17:24:05.327144 | orchestrator | Monday 02 June 2025 17:24:05 +0000 (0:00:00.102) 0:00:00.321 ***********
2025-06-02 17:24:06.251409 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:24:06.252647 | orchestrator |
2025-06-02 17:24:06.253562 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 17:24:06.253859 | orchestrator | Monday 02 June 2025 17:24:06 +0000 (0:00:00.926) 0:00:01.247 ***********
2025-06-02 17:24:06.370507 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:24:06.371137 | orchestrator |
2025-06-02 17:24:06.373710 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 17:24:06.373745 | orchestrator |
2025-06-02 17:24:06.373986 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 17:24:06.374669 | orchestrator | Monday 02 June 2025 17:24:06 +0000 (0:00:00.116) 0:00:01.364 ***********
2025-06-02 17:24:06.475373 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:24:06.477479 | orchestrator |
2025-06-02 17:24:06.477508 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 17:24:06.477522 | orchestrator | Monday 02 June 2025 17:24:06 +0000 (0:00:00.107) 0:00:01.472 ***********
2025-06-02 17:24:07.137449 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:24:07.138375 | orchestrator |
2025-06-02 17:24:07.139280 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 17:24:07.140519 | orchestrator | Monday 02 June 2025 17:24:07 +0000 (0:00:00.661) 0:00:02.133 ***********
2025-06-02 17:24:07.283683 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:24:07.285494 | orchestrator |
2025-06-02 17:24:07.285924 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 17:24:07.286848 | orchestrator |
2025-06-02 17:24:07.288029 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 17:24:07.288198 | orchestrator | Monday 02 June 2025 17:24:07 +0000 (0:00:00.142) 0:00:02.276 ***********
2025-06-02 17:24:07.486824 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:24:07.487609 | orchestrator |
2025-06-02 17:24:07.488591 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 17:24:07.489693 | orchestrator | Monday 02 June 2025 17:24:07 +0000 (0:00:00.206) 0:00:02.483 ***********
2025-06-02 17:24:08.154825 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:24:08.155505 | orchestrator |
2025-06-02 17:24:08.156353 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 17:24:08.157286 | orchestrator | Monday 02 June 2025 17:24:08 +0000 (0:00:00.668) 0:00:03.151 ***********
2025-06-02 17:24:08.293733 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:24:08.293833 | orchestrator |
2025-06-02 17:24:08.293849 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 17:24:08.294472 | orchestrator |
2025-06-02 17:24:08.295607 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 17:24:08.296567 | orchestrator | Monday 02 June 2025 17:24:08 +0000 (0:00:00.135) 0:00:03.287 ***********
2025-06-02 17:24:08.386359 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:24:08.386428 | orchestrator |
2025-06-02 17:24:08.387092 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 17:24:08.387933 | orchestrator | Monday 02 June 2025 17:24:08 +0000 (0:00:00.093) 0:00:03.380 ***********
2025-06-02 17:24:09.058125 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:24:09.058223 | orchestrator |
2025-06-02 17:24:09.060490 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 17:24:09.061527 | orchestrator | Monday 02 June 2025 17:24:09 +0000 (0:00:00.672) 0:00:04.053 ***********
2025-06-02 17:24:09.198874 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:24:09.199632 | orchestrator |
2025-06-02 17:24:09.201111 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 17:24:09.202383 | orchestrator |
2025-06-02 17:24:09.203368 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 17:24:09.204515 | orchestrator | Monday 02 June 2025 17:24:09 +0000 (0:00:00.139) 0:00:04.193 ***********
2025-06-02 17:24:09.313702 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:24:09.314884 | orchestrator |
2025-06-02 17:24:09.317345 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 17:24:09.318866 | orchestrator | Monday 02 June 2025 17:24:09 +0000 (0:00:00.115) 0:00:04.309 ***********
2025-06-02 17:24:09.981939 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:24:09.983802 | orchestrator |
2025-06-02 17:24:09.983973 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 17:24:09.985216 | orchestrator | Monday 02 June 2025 17:24:09 +0000 (0:00:00.668) 0:00:04.977 ***********
2025-06-02 17:24:10.101215 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:24:10.101557 | orchestrator |
2025-06-02 17:24:10.102467 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 17:24:10.103529 | orchestrator |
2025-06-02 17:24:10.104621 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 17:24:10.104937 | orchestrator | Monday 02 June 2025 17:24:10 +0000 (0:00:00.116) 0:00:05.093 ***********
2025-06-02 17:24:10.219192 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:24:10.219241 | orchestrator |
2025-06-02 17:24:10.220234 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 17:24:10.221206 | orchestrator | Monday 02 June 2025 17:24:10 +0000 (0:00:00.121) 0:00:05.215 ***********
2025-06-02 17:24:10.872431 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:24:10.872606 | orchestrator |
2025-06-02 17:24:10.872865 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 17:24:10.872992 | orchestrator | Monday 02 June 2025 17:24:10 +0000 (0:00:00.653) 0:00:05.868 ***********
2025-06-02 17:24:10.908749 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:24:10.909547 | orchestrator |
2025-06-02 17:24:10.910443 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:24:10.911187 | orchestrator | 2025-06-02 17:24:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:24:10.911555 | orchestrator | 2025-06-02 17:24:10 | INFO  | Please wait and do not abort execution.
2025-06-02 17:24:10.913409 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:24:10.914622 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:24:10.915399 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:24:10.916280 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:24:10.917049 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:24:10.917564 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:24:10.918160 | orchestrator |
2025-06-02 17:24:10.918565 | orchestrator |
2025-06-02 17:24:10.919440 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:24:10.920164 | orchestrator | Monday 02 June 2025 17:24:10 +0000 (0:00:00.037) 0:00:05.905 ***********
2025-06-02 17:24:10.921760 | orchestrator | ===============================================================================
2025-06-02 17:24:10.921820 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s
2025-06-02 17:24:10.922304 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2025-06-02 17:24:10.923355 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.69s
2025-06-02 17:24:11.794275 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-06-02 17:24:13.721685 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:24:13.721782 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:24:13.721818 | orchestrator | Registering Redlock._release_script
2025-06-02 17:24:13.793052 | orchestrator | 2025-06-02 17:24:13 | INFO  | Task ebc669e2-8aab-48e7-98ef-9bcb20352c5e (wait-for-connection) was prepared for execution.
2025-06-02 17:24:13.793151 | orchestrator | 2025-06-02 17:24:13 | INFO  | It takes a moment until task ebc669e2-8aab-48e7-98ef-9bcb20352c5e (wait-for-connection) has been started and output is visible here.
2025-06-02 17:24:17.805860 | orchestrator |
2025-06-02 17:24:17.807370 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-06-02 17:24:17.807408 | orchestrator |
2025-06-02 17:24:17.807992 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-06-02 17:24:17.809839 | orchestrator | Monday 02 June 2025 17:24:17 +0000 (0:00:00.218) 0:00:00.218 ***********
2025-06-02 17:24:29.554322 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:24:29.554436 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:24:29.554516 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:24:29.554996 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:24:29.556032 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:24:29.556967 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:24:29.557541 | orchestrator |
2025-06-02 17:24:29.558857 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:24:29.558905 | orchestrator | 2025-06-02 17:24:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:24:29.558920 | orchestrator | 2025-06-02 17:24:29 | INFO  | Please wait and do not abort execution.
2025-06-02 17:24:29.559568 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:29.561138 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:29.561595 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:29.562196 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:29.562660 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:29.563136 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:29.563850 | orchestrator |
2025-06-02 17:24:29.564200 | orchestrator |
2025-06-02 17:24:29.564815 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:24:29.565707 | orchestrator | Monday 02 June 2025 17:24:29 +0000 (0:00:11.749) 0:00:11.967 ***********
2025-06-02 17:24:29.566365 | orchestrator | ===============================================================================
2025-06-02 17:24:29.566510 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.75s
2025-06-02 17:24:30.029525 | orchestrator | + osism apply hddtemp
2025-06-02 17:24:31.564308 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:24:31.564380 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:24:31.564393 | orchestrator | Registering Redlock._release_script
2025-06-02 17:24:31.616641 | orchestrator | 2025-06-02 17:24:31 | INFO  | Task 3291f1aa-839e-4aea-b63b-d63ac3a9581c (hddtemp) was prepared for execution.
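The wait-for-connection play above polls each rebooted node until its connection is usable again, which is why the single task accounts for 11.75s. The underlying retry pattern, reduced to a generic poller (a simplified sketch; Ansible's `wait_for_connection` probes the connection plugin internally rather than calling back into user code):

```python
import time


def wait_until(check, timeout: float = 600.0, interval: float = 1.0) -> bool:
    """Poll `check()` until it returns True or `timeout` seconds elapse.

    Mirrors the retry behaviour of wait-for-connection style tasks:
    try, sleep `interval`, try again, and give up at the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

In the playbook this role is filled by the connection probe itself; here `check` could be any predicate, e.g. a TCP connect attempt against port 22 of the rebooted node.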
2025-06-02 17:24:31.616722 | orchestrator | 2025-06-02 17:24:31 | INFO  | It takes a moment until task 3291f1aa-839e-4aea-b63b-d63ac3a9581c (hddtemp) has been started and output is visible here.
2025-06-02 17:24:35.775166 | orchestrator |
2025-06-02 17:24:35.778478 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-06-02 17:24:35.778594 | orchestrator |
2025-06-02 17:24:35.778613 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-06-02 17:24:35.779320 | orchestrator | Monday 02 June 2025 17:24:35 +0000 (0:00:00.276) 0:00:00.276 ***********
2025-06-02 17:24:35.946635 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:36.028368 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:24:36.106641 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:24:36.184971 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:24:36.381664 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:24:36.518247 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:24:36.519133 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:24:36.520228 | orchestrator |
2025-06-02 17:24:36.523529 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-06-02 17:24:36.523582 | orchestrator | Monday 02 June 2025 17:24:36 +0000 (0:00:00.741) 0:00:01.018 ***********
2025-06-02 17:24:37.727189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:24:37.727876 | orchestrator |
2025-06-02 17:24:37.729241 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-06-02 17:24:37.730224 | orchestrator | Monday 02 June 2025 17:24:37 +0000 (0:00:01.209) 0:00:02.227 ***********
2025-06-02 17:24:39.643352 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:39.645855 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:24:39.648528 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:24:39.649417 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:24:39.650487 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:24:39.651534 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:24:39.652070 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:24:39.652752 | orchestrator |
2025-06-02 17:24:39.653552 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-06-02 17:24:39.653766 | orchestrator | Monday 02 June 2025 17:24:39 +0000 (0:00:01.918) 0:00:04.145 ***********
2025-06-02 17:24:40.291152 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:40.379704 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:24:40.838976 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:24:40.839087 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:24:40.839101 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:24:40.839112 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:24:40.840474 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:24:40.841242 | orchestrator |
2025-06-02 17:24:40.842072 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-06-02 17:24:40.842996 | orchestrator | Monday 02 June 2025 17:24:40 +0000 (0:00:01.188) 0:00:05.334 ***********
2025-06-02 17:24:42.072362 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:24:42.073160 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:24:42.073811 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:24:42.074313 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:24:42.075329 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:24:42.075974 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:24:42.076669 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:42.078613 | orchestrator |
2025-06-02 17:24:42.078698 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-02 17:24:42.078713 | orchestrator | Monday 02 June 2025 17:24:42 +0000 (0:00:01.239) 0:00:06.573 *********** 2025-06-02 17:24:42.614290 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:24:42.702695 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:24:42.784508 | orchestrator | changed: [testbed-manager] 2025-06-02 17:24:42.870469 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:24:43.001344 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:24:43.002196 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:24:43.003780 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:24:43.005552 | orchestrator | 2025-06-02 17:24:43.006625 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-02 17:24:43.007366 | orchestrator | Monday 02 June 2025 17:24:42 +0000 (0:00:00.926) 0:00:07.500 *********** 2025-06-02 17:24:55.368361 | orchestrator | changed: [testbed-manager] 2025-06-02 17:24:55.368524 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:24:55.368610 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:24:55.370481 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:24:55.370506 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:24:55.373400 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:24:55.375129 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:24:55.375710 | orchestrator | 2025-06-02 17:24:55.376781 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-02 17:24:55.377870 | orchestrator | Monday 02 June 2025 17:24:55 +0000 (0:00:12.366) 0:00:19.867 *********** 2025-06-02 17:24:56.817315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:24:56.817491 | orchestrator | 2025-06-02 17:24:56.818593 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-02 17:24:56.819286 | orchestrator | Monday 02 June 2025 17:24:56 +0000 (0:00:01.450) 0:00:21.318 *********** 2025-06-02 17:24:58.745640 | orchestrator | changed: [testbed-manager] 2025-06-02 17:24:58.746556 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:24:58.747250 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:24:58.748123 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:24:58.749731 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:24:58.750750 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:24:58.752404 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:24:58.753159 | orchestrator | 2025-06-02 17:24:58.754333 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:24:58.754417 | orchestrator | 2025-06-02 17:24:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:24:58.754782 | orchestrator | 2025-06-02 17:24:58 | INFO  | Please wait and do not abort execution. 
2025-06-02 17:24:58.755827 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:58.756593 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:24:58.756978 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:24:58.757866 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:24:58.758221 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:24:58.759086 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:24:58.759189 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:24:58.760745 | orchestrator |
2025-06-02 17:24:58.762125 | orchestrator |
2025-06-02 17:24:58.762353 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:24:58.763380 | orchestrator | Monday 02 June 2025 17:24:58 +0000 (0:00:01.929) 0:00:23.247 ***********
2025-06-02 17:24:58.763705 | orchestrator | ===============================================================================
2025-06-02 17:24:58.764109 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.37s
2025-06-02 17:24:58.764992 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s
2025-06-02 17:24:58.765738 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s
2025-06-02 17:24:58.767548 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.45s
2025-06-02 17:24:58.768083 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.24s
2025-06-02 17:24:58.769022 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s
2025-06-02 17:24:58.769511 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s
2025-06-02 17:24:58.769739 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.93s
2025-06-02 17:24:58.770480 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s
2025-06-02 17:24:59.455121 | orchestrator | ++ semver 9.1.0 7.1.1
2025-06-02 17:24:59.515868 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 17:24:59.515977 | orchestrator | + sudo systemctl restart manager.service
2025-06-02 17:25:13.893197 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-02 17:25:13.893324 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-02 17:25:13.893340 | orchestrator | + local max_attempts=60
2025-06-02 17:25:13.893352 | orchestrator | + local name=ceph-ansible
2025-06-02 17:25:13.893364 | orchestrator | + local attempt_num=1
2025-06-02 17:25:13.893375 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:13.927734 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:13.927825 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:13.927838 | orchestrator | + sleep 5
2025-06-02 17:25:18.935529 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:18.989948 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:18.990113 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:18.990130 | orchestrator | + sleep 5
2025-06-02 17:25:23.992809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:24.015631 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:24.015741 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:24.015756 | orchestrator | + sleep 5
2025-06-02 17:25:29.019043 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:29.054936 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:29.055023 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:29.055044 | orchestrator | + sleep 5
2025-06-02 17:25:34.059218 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:34.097256 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:34.097350 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:34.097366 | orchestrator | + sleep 5
2025-06-02 17:25:39.101299 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:39.141664 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:39.141774 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:39.141803 | orchestrator | + sleep 5
2025-06-02 17:25:44.145947 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:44.184518 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:44.184622 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:44.184639 | orchestrator | + sleep 5
2025-06-02 17:25:49.191509 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:49.216485 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:49.216591 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:49.216606 | orchestrator | + sleep 5
2025-06-02 17:25:54.219348 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:54.239408 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:54.239501 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:54.239517 | orchestrator | + sleep 5
2025-06-02 17:25:59.244717 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:25:59.277566 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-02 17:25:59.425034 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:25:59.425139 | orchestrator | + sleep 5
2025-06-02 17:26:04.283171 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:26:04.317668 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-02 17:26:04.317765 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:26:04.317780 | orchestrator | + sleep 5
2025-06-02 17:26:09.323631 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:26:09.361691 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-02 17:26:09.361882 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:26:09.361900 | orchestrator | + sleep 5
2025-06-02 17:26:14.366073 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:26:14.403691 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-06-02 17:26:14.403797 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-06-02 17:26:14.403844 | orchestrator | + sleep 5
2025-06-02 17:26:19.408039 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:26:19.448927 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:26:19.448996 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-02 17:26:19.449010 | orchestrator | + local max_attempts=60
2025-06-02 17:26:19.449022 | orchestrator | + local name=kolla-ansible
2025-06-02 17:26:19.449033 | orchestrator | + local attempt_num=1
2025-06-02 17:26:19.449986 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-02 17:26:19.489013 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:26:19.489084 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-02 17:26:19.489105 | orchestrator | + local max_attempts=60
2025-06-02 17:26:19.489126 | orchestrator | + local name=osism-ansible
2025-06-02 17:26:19.489146 | orchestrator | + local attempt_num=1
2025-06-02 17:26:19.490134 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-02 17:26:19.516794 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:26:19.516904 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-02 17:26:19.516920 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-02 17:26:19.679297 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-02 17:26:19.861203 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-02 17:26:20.033064 | orchestrator | ARA in osism-ansible already disabled.
2025-06-02 17:26:20.234912 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-02 17:26:20.236224 | orchestrator | + osism apply gather-facts
2025-06-02 17:26:22.153613 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:26:22.153715 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:26:22.153730 | orchestrator | Registering Redlock._release_script
2025-06-02 17:26:22.213644 | orchestrator | 2025-06-02 17:26:22 | INFO  | Task 0c815e16-9ad4-493c-9610-4f6d1758aa59 (gather-facts) was prepared for execution.
2025-06-02 17:26:22.213733 | orchestrator | 2025-06-02 17:26:22 | INFO  | It takes a moment until task 0c815e16-9ad4-493c-9610-4f6d1758aa59 (gather-facts) has been started and output is visible here.
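The `set -x` trace above shows the deploy script's health gate: it polls `docker inspect` once every five seconds until the container reports `healthy`, bailing out after `max_attempts` tries. Reconstructed from the trace alone (a sketch, not the testbed's actual source), the helper looks roughly like this; `docker` is invoked without the absolute path so it can be substituted in tests:

```shell
# Poll a container's health status until it is "healthy" or we give up.
# Mirrors the variables visible in the trace: max_attempts, name, attempt_num.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        # Post-increment: the comparison uses the old value, as in the trace.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

Note that the status passes through `starting` before `healthy` (visible from 17:25:49 onward), so the loop treats anything other than `healthy` — including `starting` and `unhealthy` — as "keep waiting".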
2025-06-02 17:26:26.334451 | orchestrator |
2025-06-02 17:26:26.334558 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:26:26.334566 | orchestrator |
2025-06-02 17:26:26.337197 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:26:26.338694 | orchestrator | Monday 02 June 2025 17:26:26 +0000 (0:00:00.234) 0:00:00.234 ***********
2025-06-02 17:26:32.058631 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:26:32.058971 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:26:32.059607 | orchestrator | ok: [testbed-manager]
2025-06-02 17:26:32.060107 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:26:32.061785 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:26:32.062174 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:26:32.062656 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:26:32.063544 | orchestrator |
2025-06-02 17:26:32.064181 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 17:26:32.065028 | orchestrator |
2025-06-02 17:26:32.065385 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 17:26:32.066153 | orchestrator | Monday 02 June 2025 17:26:32 +0000 (0:00:05.730) 0:00:05.965 ***********
2025-06-02 17:26:32.219200 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:26:32.295951 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:26:32.374833 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:26:32.465464 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:26:32.564632 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:26:32.618564 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:26:32.618649 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:26:32.618663 | orchestrator |
2025-06-02 17:26:32.618677 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:26:32.618718 | orchestrator | 2025-06-02 17:26:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:26:32.618733 | orchestrator | 2025-06-02 17:26:32 | INFO  | Please wait and do not abort execution.
2025-06-02 17:26:32.619428 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:26:32.620694 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:26:32.622605 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:26:32.623462 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:26:32.624521 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:26:32.625059 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:26:32.625706 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:26:32.626399 | orchestrator |
2025-06-02 17:26:32.627097 | orchestrator |
2025-06-02 17:26:32.627523 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:26:32.628106 | orchestrator | Monday 02 June 2025 17:26:32 +0000 (0:00:00.554) 0:00:06.519 ***********
2025-06-02 17:26:32.628701 | orchestrator | ===============================================================================
2025-06-02 17:26:32.629324 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.73s
2025-06-02 17:26:32.629849 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2025-06-02 17:26:33.339140 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-02 17:26:33.352696 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-02 17:26:33.367619 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-02 17:26:33.379749 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-02 17:26:33.396594 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-02 17:26:33.417162 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-02 17:26:33.430558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-02 17:26:33.442317 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-02 17:26:33.456902 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-02 17:26:33.469281 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-02 17:26:33.480438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-02 17:26:33.503407 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-02 17:26:33.518951 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-02 17:26:33.538341 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-02 17:26:33.553842 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-02 17:26:33.570869 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-02 17:26:33.585328 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-02 17:26:33.600712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-02 17:26:33.617744 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-02 17:26:33.636631 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-02 17:26:33.649880 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-02 17:26:33.955317 | orchestrator | ok: Runtime: 0:20:31.511997
2025-06-02 17:26:34.082369 |
2025-06-02 17:26:34.082523 | TASK [Deploy services]
2025-06-02 17:26:34.618093 | orchestrator | skipping: Conditional result was False
2025-06-02 17:26:34.636720 |
2025-06-02 17:26:34.636912 | TASK [Deploy in a nutshell]
2025-06-02 17:26:35.414715 | orchestrator |
2025-06-02 17:26:35.414985 | orchestrator | # PULL IMAGES
2025-06-02 17:26:35.415011 | orchestrator |
2025-06-02 17:26:35.415025 | orchestrator | + set -e
2025-06-02 17:26:35.415043 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 17:26:35.415063 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 17:26:35.415091 | orchestrator | ++ INTERACTIVE=false
2025-06-02 17:26:35.415137 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 17:26:35.415161 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 17:26:35.415176 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 17:26:35.415188 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 17:26:35.415207 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 17:26:35.415218 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 17:26:35.415236 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 17:26:35.415248 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 17:26:35.415267 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 17:26:35.415278 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 17:26:35.415293 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 17:26:35.415304 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 17:26:35.415317 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 17:26:35.415328 | orchestrator | ++ export ARA=false
2025-06-02 17:26:35.415339 | orchestrator | ++ ARA=false
2025-06-02 17:26:35.415350 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 17:26:35.415361 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 17:26:35.415372 | orchestrator | ++ export TEMPEST=false
2025-06-02 17:26:35.415383 | orchestrator | ++ TEMPEST=false
2025-06-02 17:26:35.415393 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 17:26:35.415404 | orchestrator | ++ IS_ZUUL=true
2025-06-02 17:26:35.415415 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2025-06-02 17:26:35.415427 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157
2025-06-02 17:26:35.415437 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 17:26:35.415448 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 17:26:35.415459 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 17:26:35.415471 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 17:26:35.415482 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 17:26:35.415492 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 17:26:35.415503 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 17:26:35.415521 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 17:26:35.415533 | orchestrator | + echo
2025-06-02 17:26:35.415544 | orchestrator | + echo '# PULL IMAGES'
2025-06-02 17:26:35.415555 | orchestrator | + echo
2025-06-02 17:26:35.415593 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-02 17:26:35.485240 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 17:26:35.485311 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-02 17:26:37.224993 | orchestrator | 2025-06-02 17:26:37 | INFO  | Trying to run play pull-images in environment custom
2025-06-02 17:26:37.229265 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:26:37.229315 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:26:37.229329 | orchestrator | Registering Redlock._release_script
2025-06-02 17:26:37.296998 | orchestrator | 2025-06-02 17:26:37 | INFO  | Task 320b5b00-07c0-4ff8-a986-df0346b80b43 (pull-images) was prepared for execution.
2025-06-02 17:26:37.297136 | orchestrator | 2025-06-02 17:26:37 | INFO  | It takes a moment until task 320b5b00-07c0-4ff8-a986-df0346b80b43 (pull-images) has been started and output is visible here.
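The deploy script gates version-dependent steps on `semver A B` followed by a check such as `[[ 1 -ge 0 ]]` on its printed result (`1` here, with `MANAGER_VERSION=9.1.0` against `7.0.0`). The helper's exact implementation is not shown in the log; its contract (print `1`/`0`/`-1` for newer/equal/older) is an assumption inferred from the trace. A minimal stand-in with that assumed contract, using GNU `sort -V` for version ordering, could look like:

```shell
# Hypothetical stand-in for the `semver` helper seen in the trace.
# Prints 1 if $1 is newer than $2, 0 if equal, -1 if older (assumed contract).
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        # $2 sorts first, so $1 is the newer version.
        echo 1
    else
        echo -1
    fi
}
```

With this contract, `[[ $(semver 9.1.0 7.0.0) -ge 0 ]]` is true, which matches the `+ [[ 1 -ge 0 ]]` lines in the trace.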
2025-06-02 17:26:41.331871 | orchestrator |
2025-06-02 17:26:41.331990 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-02 17:26:41.335431 | orchestrator |
2025-06-02 17:26:41.335984 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-02 17:26:41.336648 | orchestrator | Monday 02 June 2025 17:26:41 +0000 (0:00:00.160) 0:00:00.160 ***********
2025-06-02 17:27:46.203693 | orchestrator | changed: [testbed-manager]
2025-06-02 17:27:46.203866 | orchestrator |
2025-06-02 17:27:46.203902 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-02 17:27:46.203933 | orchestrator | Monday 02 June 2025 17:27:46 +0000 (0:01:04.872) 0:01:05.032 ***********
2025-06-02 17:28:41.985456 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-02 17:28:41.985604 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-02 17:28:41.987328 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-02 17:28:41.987909 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-02 17:28:41.990070 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-02 17:28:41.993098 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-02 17:28:41.994669 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-02 17:28:41.995740 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-02 17:28:41.997463 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-02 17:28:42.001359 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-02 17:28:42.001395 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-02 17:28:42.001409 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-02 17:28:42.001420 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-02 17:28:42.002459 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-02 17:28:42.003726 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-02 17:28:42.004948 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-02 17:28:42.005545 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-02 17:28:42.006063 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-02 17:28:42.007915 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-02 17:28:42.007937 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-02 17:28:42.007949 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-02 17:28:42.008935 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-02 17:28:42.009339 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-02 17:28:42.009942 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-02 17:28:42.010338 | orchestrator |
2025-06-02 17:28:42.010971 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:28:42.011347 | orchestrator | 2025-06-02 17:28:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:28:42.011503 | orchestrator | 2025-06-02 17:28:42 | INFO  | Please wait and do not abort execution.
2025-06-02 17:28:42.013142 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:28:42.013163 | orchestrator |
2025-06-02 17:28:42.013176 | orchestrator |
2025-06-02 17:28:42.013194 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:28:42.013782 | orchestrator | Monday 02 June 2025 17:28:41 +0000 (0:00:55.783) 0:02:00.816 ***********
2025-06-02 17:28:42.014504 | orchestrator | ===============================================================================
2025-06-02 17:28:42.014882 | orchestrator | Pull keystone image ---------------------------------------------------- 64.87s
2025-06-02 17:28:42.015459 | orchestrator | Pull other images ------------------------------------------------------ 55.78s
2025-06-02 17:28:44.485240 | orchestrator | 2025-06-02 17:28:44 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-02 17:28:44.490172 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:28:44.490221 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:28:44.490234 | orchestrator | Registering Redlock._release_script
2025-06-02 17:28:44.559645 | orchestrator | 2025-06-02 17:28:44 | INFO  | Task 9b534f4e-07f6-40ec-a52d-9c942d72b214 (wipe-partitions) was prepared for execution.
2025-06-02 17:28:44.559789 | orchestrator | 2025-06-02 17:28:44 | INFO  | It takes a moment until task 9b534f4e-07f6-40ec-a52d-9c942d72b214 (wipe-partitions) has been started and output is visible here.
2025-06-02 17:28:48.900647 | orchestrator |
2025-06-02 17:28:48.900824 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-06-02 17:28:48.903081 | orchestrator |
2025-06-02 17:28:48.903204 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-06-02 17:28:48.903221 | orchestrator | Monday 02 June 2025 17:28:48 +0000 (0:00:00.137) 0:00:00.137 ***********
2025-06-02 17:28:49.564024 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:28:49.564179 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:28:49.564209 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:28:49.564225 | orchestrator |
2025-06-02 17:28:49.564516 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-06-02 17:28:49.565966 | orchestrator | Monday 02 June 2025 17:28:49 +0000 (0:00:00.665) 0:00:00.803 ***********
2025-06-02 17:28:49.818331 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:28:49.904406 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:28:49.904529 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:28:49.904645 | orchestrator |
2025-06-02 17:28:49.904736 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-06-02 17:28:49.904759 | orchestrator | Monday 02 June 2025 17:28:49 +0000 (0:00:00.336) 0:00:01.139 ***********
2025-06-02 17:28:50.842562 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:28:50.842732 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:28:50.842890 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:28:50.842909 | orchestrator |
2025-06-02 17:28:50.843201 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-06-02 17:28:50.843452 | orchestrator | Monday 02 June 2025 17:28:50 +0000 (0:00:00.943) 0:00:02.083 ***********
2025-06-02 17:28:51.018930 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:28:51.139638 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:28:51.139860 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:28:51.140204 | orchestrator |
2025-06-02 17:28:51.141443 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-02 17:28:51.141633 | orchestrator | Monday 02 June 2025 17:28:51 +0000 (0:00:00.295) 0:00:02.379 ***********
2025-06-02 17:28:52.344825 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 17:28:52.344976 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 17:28:52.344996 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 17:28:52.345099 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 17:28:52.345222 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 17:28:52.345516 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 17:28:52.345826 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 17:28:52.350992 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 17:28:52.351212 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 17:28:52.354157 | orchestrator |
2025-06-02 17:28:52.360089 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-02 17:28:52.360143 | orchestrator | Monday 02 June 2025 17:28:52 +0000 (0:00:01.203) 0:00:03.583 ***********
2025-06-02 17:28:53.681334 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 17:28:53.682393 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 17:28:53.682433 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 17:28:53.682460 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 17:28:53.682472 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 17:28:53.683181 | orchestrator | ok:
[testbed-node-5] => (item=/dev/sdc) 2025-06-02 17:28:53.684119 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 17:28:53.684209 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 17:28:53.684805 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 17:28:53.685317 | orchestrator | 2025-06-02 17:28:53.685382 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-02 17:28:53.685730 | orchestrator | Monday 02 June 2025 17:28:53 +0000 (0:00:01.336) 0:00:04.919 *********** 2025-06-02 17:28:55.849896 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 17:28:55.849997 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 17:28:55.851108 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 17:28:55.851708 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 17:28:55.852595 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 17:28:55.853426 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-02 17:28:55.854201 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 17:28:55.855354 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 17:28:55.855383 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 17:28:55.856125 | orchestrator | 2025-06-02 17:28:55.856440 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-02 17:28:55.857225 | orchestrator | Monday 02 June 2025 17:28:55 +0000 (0:00:02.168) 0:00:07.087 *********** 2025-06-02 17:28:56.446861 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:28:56.449725 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:28:56.450798 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:28:56.453201 | orchestrator | 2025-06-02 17:28:56.456590 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-06-02 17:28:56.456636 | orchestrator | Monday 02 June 2025 17:28:56 +0000 (0:00:00.589) 0:00:07.677 *********** 2025-06-02 17:28:57.051402 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:28:57.056007 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:28:57.057850 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:28:57.058823 | orchestrator | 2025-06-02 17:28:57.060340 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:28:57.060606 | orchestrator | 2025-06-02 17:28:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:28:57.061226 | orchestrator | 2025-06-02 17:28:57 | INFO  | Please wait and do not abort execution. 2025-06-02 17:28:57.062426 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:28:57.063193 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:28:57.064266 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:28:57.065574 | orchestrator | 2025-06-02 17:28:57.066621 | orchestrator | 2025-06-02 17:28:57.066744 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:28:57.070113 | orchestrator | Monday 02 June 2025 17:28:57 +0000 (0:00:00.611) 0:00:08.289 *********** 2025-06-02 17:28:57.070376 | orchestrator | =============================================================================== 2025-06-02 17:28:57.070886 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.17s 2025-06-02 17:28:57.071128 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s 2025-06-02 17:28:57.071493 | orchestrator | Check device availability 
----------------------------------------------- 1.20s 2025-06-02 17:28:57.071929 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.94s 2025-06-02 17:28:57.072242 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.67s 2025-06-02 17:28:57.072650 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-06-02 17:28:57.073023 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-06-02 17:28:57.073366 | orchestrator | Remove all rook related logical devices --------------------------------- 0.34s 2025-06-02 17:28:57.073755 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2025-06-02 17:28:59.027794 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:28:59.027908 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:28:59.027925 | orchestrator | Registering Redlock._release_script 2025-06-02 17:28:59.099539 | orchestrator | 2025-06-02 17:28:59 | INFO  | Task 89edf195-761a-4d9d-b009-4ef333fd3ad6 (facts) was prepared for execution. 2025-06-02 17:28:59.099627 | orchestrator | 2025-06-02 17:28:59 | INFO  | It takes a moment until task 89edf195-761a-4d9d-b009-4ef333fd3ad6 (facts) has been started and output is visible here. 
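The wipe-partitions play above runs a fixed destructive sequence per data disk: `wipefs` clears filesystem and partition-table signatures, `dd` zeroes the first 32 MiB, and the two `udevadm` steps make the kernel and udev re-read the now-empty devices. A minimal sketch of the equivalent command plan, under the assumption that the device list and `dd` options match the task output; the plan is only printed here, never executed, because these commands destroy data:

```python
# Sketch of the command sequence implied by the wipe-partitions tasks.
# The device list and exact dd options are assumptions for illustration.
devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

plan = []
for dev in devices:
    plan.append(["wipefs", "-a", dev])                  # clear fs/partition signatures
    plan.append(["dd", "if=/dev/zero", f"of={dev}",
                 "bs=1M", "count=32", "oflag=direct"])  # zero the first 32 MiB
plan.append(["udevadm", "control", "--reload-rules"])   # reload udev rules
plan.append(["udevadm", "trigger"])                     # request device events

for cmd in plan:
    print(" ".join(cmd))
```

To actually run it, each entry could be passed to `subprocess.run(cmd, check=True)` with root privileges, which is roughly what the Ansible tasks do per host.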
2025-06-02 17:29:03.745093 | orchestrator |
2025-06-02 17:29:03.748487 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 17:29:03.751359 | orchestrator |
2025-06-02 17:29:03.752533 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 17:29:03.753448 | orchestrator | Monday 02 June 2025 17:29:03 +0000 (0:00:00.306) 0:00:00.306 ***********
2025-06-02 17:29:04.497570 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:29:05.010641 | orchestrator | ok: [testbed-manager]
2025-06-02 17:29:05.010868 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:29:05.010899 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:29:05.010935 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:29:05.010948 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:29:05.011231 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:29:05.013416 | orchestrator |
2025-06-02 17:29:05.015502 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 17:29:05.015576 | orchestrator | Monday 02 June 2025 17:29:05 +0000 (0:00:01.265) 0:00:01.572 ***********
2025-06-02 17:29:05.165951 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:29:05.246289 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:29:05.364080 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:29:05.482771 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:29:05.577988 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:06.271807 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:29:06.275325 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:06.276313 | orchestrator |
2025-06-02 17:29:06.277733 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:29:06.278929 | orchestrator |
2025-06-02 17:29:06.280080 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:29:06.281154 | orchestrator | Monday 02 June 2025 17:29:06 +0000 (0:00:01.264) 0:00:02.837 ***********
2025-06-02 17:29:08.272875 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:29:12.501712 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:29:12.503729 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:29:12.505013 | orchestrator | ok: [testbed-manager]
2025-06-02 17:29:12.509067 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:29:12.509725 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:29:12.511269 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:29:12.512761 | orchestrator |
2025-06-02 17:29:12.516903 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 17:29:12.518610 | orchestrator |
2025-06-02 17:29:12.520301 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 17:29:12.522958 | orchestrator | Monday 02 June 2025 17:29:12 +0000 (0:00:06.229) 0:00:09.067 ***********
2025-06-02 17:29:12.662166 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:29:12.743037 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:29:12.815370 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:29:12.895219 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:29:12.984371 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:13.033856 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:29:13.035851 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:13.037904 | orchestrator |
2025-06-02 17:29:13.041048 | orchestrator | 2025-06-02 17:29:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:29:13.041074 | orchestrator | 2025-06-02 17:29:13 | INFO  | Please wait and do not abort execution.
2025-06-02 17:29:13.041273 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:29:13.043572 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:29:13.045311 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:29:13.050315 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:29:13.052480 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:29:13.053257 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:29:13.054349 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:29:13.055155 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:29:13.056104 | orchestrator |
2025-06-02 17:29:13.057383 | orchestrator |
2025-06-02 17:29:13.057477 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:29:13.058339 | orchestrator | Monday 02 June 2025 17:29:13 +0000 (0:00:00.531) 0:00:09.598 ***********
2025-06-02 17:29:13.059240 | orchestrator | ===============================================================================
2025-06-02 17:29:13.059602 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.23s
2025-06-02 17:29:13.060418 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.27s
2025-06-02 17:29:13.061132 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-06-02 17:29:13.061586 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
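The osism.commons.facts role above only ensures a custom facts directory exists and optionally copies fact files into it. The mechanism behind this is Ansible's local facts: JSON (or INI, or executable) files with a `.fact` suffix under `/etc/ansible/facts.d` are read by the `setup` module and exposed to later plays as `ansible_local.<basename>`. A small sketch of that round trip, writing to a temporary directory instead of `/etc/ansible/facts.d` and using an invented example payload:

```python
import json
import pathlib
import tempfile

# Local facts are *.fact files in /etc/ansible/facts.d; a JSON file there
# appears to later plays as ansible_local.<basename>. We use a temp
# directory here, and the payload is an invented example.
facts_d = pathlib.Path(tempfile.mkdtemp())
payload = {"role": "generic", "managed_by": "osism"}
(facts_d / "testbed.fact").write_text(json.dumps(payload))

# What the setup module would read back as ansible_local.testbed:
loaded = json.loads((facts_d / "testbed.fact").read_text())
print(loaded)
```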
2025-06-02 17:29:15.441750 | orchestrator | 2025-06-02 17:29:15 | INFO  | Task b9dff883-ddc5-41e5-b9fb-a25101d1591e (ceph-configure-lvm-volumes) was prepared for execution.
2025-06-02 17:29:15.441929 | orchestrator | 2025-06-02 17:29:15 | INFO  | It takes a moment until task b9dff883-ddc5-41e5-b9fb-a25101d1591e (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-06-02 17:29:19.835772 | orchestrator |
2025-06-02 17:29:19.836790 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-02 17:29:19.838712 | orchestrator |
2025-06-02 17:29:19.840013 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 17:29:19.841120 | orchestrator | Monday 02 June 2025 17:29:19 +0000 (0:00:00.335) 0:00:00.335 ***********
2025-06-02 17:29:20.093102 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 17:29:20.093210 | orchestrator |
2025-06-02 17:29:20.093228 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 17:29:20.093347 | orchestrator | Monday 02 June 2025 17:29:20 +0000 (0:00:00.260) 0:00:00.595 ***********
2025-06-02 17:29:20.332100 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:29:20.333540 | orchestrator |
2025-06-02 17:29:20.333842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:20.334356 | orchestrator | Monday 02 June 2025 17:29:20 +0000 (0:00:00.238) 0:00:00.834 ***********
2025-06-02 17:29:20.699339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-02 17:29:20.700448 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-02 17:29:20.702149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-02 17:29:20.705198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-02 17:29:20.705363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-02 17:29:20.705931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-02 17:29:20.706555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-02 17:29:20.707073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-02 17:29:20.707518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-02 17:29:20.708191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-02 17:29:20.708635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-02 17:29:20.709056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-02 17:29:20.709696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-02 17:29:20.710235 | orchestrator |
2025-06-02 17:29:20.710945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:20.711399 | orchestrator | Monday 02 June 2025 17:29:20 +0000 (0:00:00.359) 0:00:01.194 ***********
2025-06-02 17:29:21.103135 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:21.103334 | orchestrator |
2025-06-02 17:29:21.103374 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:21.103624 | orchestrator | Monday 02 June 2025 17:29:21 +0000 (0:00:00.412) 0:00:01.607 ***********
2025-06-02 17:29:21.275446 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:21.278199 | orchestrator |
2025-06-02 17:29:21.281022 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:21.282139 | orchestrator | Monday 02 June 2025 17:29:21 +0000 (0:00:00.170) 0:00:01.777 ***********
2025-06-02 17:29:21.445360 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:21.445507 | orchestrator |
2025-06-02 17:29:21.445523 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:21.445925 | orchestrator | Monday 02 June 2025 17:29:21 +0000 (0:00:00.169) 0:00:01.946 ***********
2025-06-02 17:29:21.628312 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:21.628946 | orchestrator |
2025-06-02 17:29:21.630824 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:21.630853 | orchestrator | Monday 02 June 2025 17:29:21 +0000 (0:00:00.183) 0:00:02.130 ***********
2025-06-02 17:29:21.811395 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:21.812445 | orchestrator |
2025-06-02 17:29:21.813323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:21.814819 | orchestrator | Monday 02 June 2025 17:29:21 +0000 (0:00:00.178) 0:00:02.308 ***********
2025-06-02 17:29:21.971826 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:21.975976 | orchestrator |
2025-06-02 17:29:21.977395 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:21.978822 | orchestrator | Monday 02 June 2025 17:29:21 +0000 (0:00:00.162) 0:00:02.471 ***********
2025-06-02 17:29:22.167560 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:22.170351 | orchestrator |
2025-06-02 17:29:22.172923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:22.174097 | orchestrator | Monday 02 June 2025 17:29:22 +0000 (0:00:00.193) 0:00:02.665 ***********
2025-06-02 17:29:22.346447 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:22.346862 | orchestrator |
2025-06-02 17:29:22.347814 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:22.348100 | orchestrator | Monday 02 June 2025 17:29:22 +0000 (0:00:00.181) 0:00:02.846 ***********
2025-06-02 17:29:22.776309 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602)
2025-06-02 17:29:22.776526 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602)
2025-06-02 17:29:22.778193 | orchestrator |
2025-06-02 17:29:22.778737 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:22.779432 | orchestrator | Monday 02 June 2025 17:29:22 +0000 (0:00:00.431) 0:00:03.278 ***********
2025-06-02 17:29:23.172026 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1)
2025-06-02 17:29:23.172446 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1)
2025-06-02 17:29:23.175058 | orchestrator |
2025-06-02 17:29:23.175121 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:23.175142 | orchestrator | Monday 02 June 2025 17:29:23 +0000 (0:00:00.395) 0:00:03.673 ***********
2025-06-02 17:29:23.784068 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361)
2025-06-02 17:29:23.787597 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361)
2025-06-02 17:29:23.789396 | orchestrator |
2025-06-02 17:29:23.789753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:23.790157 | orchestrator | Monday 02 June 2025 17:29:23 +0000 (0:00:00.611) 0:00:04.285 ***********
2025-06-02 17:29:24.462811 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767)
2025-06-02 17:29:24.464146 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767)
2025-06-02 17:29:24.470125 | orchestrator |
2025-06-02 17:29:24.470211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:24.470227 | orchestrator | Monday 02 June 2025 17:29:24 +0000 (0:00:00.680) 0:00:04.966 ***********
2025-06-02 17:29:25.313314 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 17:29:25.314390 | orchestrator |
2025-06-02 17:29:25.316024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:25.320456 | orchestrator | Monday 02 June 2025 17:29:25 +0000 (0:00:00.849) 0:00:05.815 ***********
2025-06-02 17:29:25.710828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-02 17:29:25.712606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-02 17:29:25.715551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-02 17:29:25.716992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-02 17:29:25.717695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-02 17:29:25.719915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-02 17:29:25.719950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-02 17:29:25.721281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-02 17:29:25.722515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-02 17:29:25.724197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-02 17:29:25.724298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-02 17:29:25.725529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-02 17:29:25.729402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-02 17:29:25.729445 | orchestrator |
2025-06-02 17:29:25.729501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:25.731475 | orchestrator | Monday 02 June 2025 17:29:25 +0000 (0:00:00.395) 0:00:06.211 ***********
2025-06-02 17:29:25.905971 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:25.907756 | orchestrator |
2025-06-02 17:29:25.911380 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:25.912159 | orchestrator | Monday 02 June 2025 17:29:25 +0000 (0:00:00.192) 0:00:06.404 ***********
2025-06-02 17:29:26.151490 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:26.153304 | orchestrator |
2025-06-02 17:29:26.155055 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:26.157569 | orchestrator | Monday 02 June 2025 17:29:26 +0000 (0:00:00.246) 0:00:06.651 ***********
2025-06-02 17:29:26.431209 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:26.433260 | orchestrator |
2025-06-02 17:29:26.434567 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:26.435372 | orchestrator | Monday 02 June 2025 17:29:26 +0000 (0:00:00.280) 0:00:06.932 ***********
2025-06-02 17:29:26.668594 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:26.670332 | orchestrator |
2025-06-02 17:29:26.671969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:26.673421 | orchestrator | Monday 02 June 2025 17:29:26 +0000 (0:00:00.236) 0:00:07.169 ***********
2025-06-02 17:29:26.885478 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:26.889003 | orchestrator |
2025-06-02 17:29:26.891493 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:26.892407 | orchestrator | Monday 02 June 2025 17:29:26 +0000 (0:00:00.215) 0:00:07.385 ***********
2025-06-02 17:29:27.114670 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:27.115957 | orchestrator |
2025-06-02 17:29:27.116038 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:27.116417 | orchestrator | Monday 02 June 2025 17:29:27 +0000 (0:00:00.230) 0:00:07.615 ***********
2025-06-02 17:29:27.320557 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:27.321294 | orchestrator |
2025-06-02 17:29:27.323908 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:27.324075 | orchestrator | Monday 02 June 2025 17:29:27 +0000 (0:00:00.205) 0:00:07.820 ***********
2025-06-02 17:29:27.502670 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:27.503544 | orchestrator |
2025-06-02 17:29:27.506112 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:27.506409 | orchestrator | Monday 02 June 2025 17:29:27 +0000 (0:00:00.182) 0:00:08.003 ***********
2025-06-02 17:29:28.683612 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-06-02 17:29:28.683837 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-06-02 17:29:28.683859 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-02 17:29:28.684153 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-02 17:29:28.684387 | orchestrator |
2025-06-02 17:29:28.685732 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:28.686951 | orchestrator | Monday 02 June 2025 17:29:28 +0000 (0:00:01.179) 0:00:09.183 ***********
2025-06-02 17:29:28.891729 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:28.892072 | orchestrator |
2025-06-02 17:29:28.893829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:28.894221 | orchestrator | Monday 02 June 2025 17:29:28 +0000 (0:00:00.208) 0:00:09.392 ***********
2025-06-02 17:29:29.132012 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:29.134533 | orchestrator |
2025-06-02 17:29:29.139202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:29.139627 | orchestrator | Monday 02 June 2025 17:29:29 +0000 (0:00:00.240) 0:00:09.632 ***********
2025-06-02 17:29:29.423008 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:29.423116 | orchestrator |
2025-06-02 17:29:29.423227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:29.426259 | orchestrator | Monday 02 June 2025 17:29:29 +0000 (0:00:00.290) 0:00:09.923 ***********
2025-06-02 17:29:29.638453 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:29.638831 | orchestrator |
2025-06-02 17:29:29.639823 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 17:29:29.640172 | orchestrator | Monday 02 June 2025 17:29:29 +0000 (0:00:00.217) 0:00:10.141 ***********
2025-06-02 17:29:29.803026 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-06-02 17:29:29.805064 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-06-02 17:29:29.805429 | orchestrator |
2025-06-02 17:29:29.806193 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 17:29:29.806418 | orchestrator | Monday 02 June 2025 17:29:29 +0000 (0:00:00.161) 0:00:10.302 ***********
2025-06-02 17:29:29.946168 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:29.947038 | orchestrator |
2025-06-02 17:29:29.948844 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 17:29:29.950608 | orchestrator | Monday 02 June 2025 17:29:29 +0000 (0:00:00.141) 0:00:10.444 ***********
2025-06-02 17:29:30.103313 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:30.103893 | orchestrator |
2025-06-02 17:29:30.105654 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 17:29:30.106818 | orchestrator | Monday 02 June 2025 17:29:30 +0000 (0:00:00.160) 0:00:10.604 ***********
2025-06-02 17:29:30.237150 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:30.237266 | orchestrator |
2025-06-02 17:29:30.237826 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 17:29:30.239085 | orchestrator | Monday 02 June 2025 17:29:30 +0000 (0:00:00.132) 0:00:10.737 ***********
2025-06-02 17:29:30.369315 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:29:30.369544 | orchestrator |
2025-06-02 17:29:30.370130 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 17:29:30.370988 | orchestrator | Monday 02 June 2025 17:29:30 +0000 (0:00:00.130) 0:00:10.867 ***********
2025-06-02 17:29:30.534221 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}})
2025-06-02 17:29:30.534872 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42dde184-17ae-50b7-8921-f17969f5efd9'}})
2025-06-02 17:29:30.536122 | orchestrator |
2025-06-02 17:29:30.536746 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 17:29:30.537103 | orchestrator | Monday 02 June 2025 17:29:30 +0000 (0:00:00.167) 0:00:11.035 ***********
2025-06-02 17:29:30.685884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}})
2025-06-02 17:29:30.686786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42dde184-17ae-50b7-8921-f17969f5efd9'}})
2025-06-02 17:29:30.687346 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:30.688057 | orchestrator |
2025-06-02 17:29:30.688847 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 17:29:30.692703 | orchestrator | Monday 02 June 2025 17:29:30 +0000 (0:00:00.153) 0:00:11.189 ***********
2025-06-02 17:29:31.009097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}})
2025-06-02 17:29:31.009244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42dde184-17ae-50b7-8921-f17969f5efd9'}})
2025-06-02 17:29:31.009334 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:31.009704 | orchestrator |
2025-06-02 17:29:31.011578 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 17:29:31.011596 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.320) 0:00:11.510 ***********
2025-06-02 17:29:31.159837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}})
2025-06-02 17:29:31.162156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42dde184-17ae-50b7-8921-f17969f5efd9'}})
2025-06-02 17:29:31.162966 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:31.166949 | orchestrator |
2025-06-02 17:29:31.169813 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 17:29:31.170238 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.152) 0:00:11.662 ***********
2025-06-02 17:29:31.312242 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:29:31.313734 | orchestrator |
2025-06-02 17:29:31.315990 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 17:29:31.318150 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.152) 0:00:11.815 ***********
2025-06-02 17:29:31.448463 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:29:31.449954 | orchestrator |
2025-06-02 17:29:31.450968 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 17:29:31.451660 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.137) 0:00:11.952 ***********
2025-06-02 17:29:31.580448 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:31.581009 | orchestrator |
2025-06-02 17:29:31.581900 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 17:29:31.582799 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.131) 0:00:12.083 ***********
2025-06-02 17:29:31.720422 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:31.721398 | orchestrator |
2025-06-02 17:29:31.722661 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 17:29:31.725823 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.139) 0:00:12.223 ***********
2025-06-02 17:29:31.857608 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:31.858744 | orchestrator |
2025-06-02 17:29:31.859747 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 17:29:31.860312 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.135) 0:00:12.359 ***********
2025-06-02 17:29:31.996143 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:29:31.997823 | orchestrator |  "ceph_osd_devices": {
2025-06-02 17:29:31.997850 | orchestrator |  "sdb": {
2025-06-02 17:29:32.001009 | orchestrator |  "osd_lvm_uuid": "94958c5d-ab49-5ebf-a5cb-ef67fe0a9704"
2025-06-02 17:29:32.001494 | orchestrator |  },
2025-06-02 17:29:32.002092 | orchestrator |  "sdc": {
2025-06-02 17:29:32.002894 | orchestrator |  "osd_lvm_uuid": "42dde184-17ae-50b7-8921-f17969f5efd9"
2025-06-02 17:29:32.003223 | orchestrator |  }
2025-06-02 17:29:32.004336 | orchestrator |  }
2025-06-02 17:29:32.004774 | orchestrator | }
2025-06-02 17:29:32.005073 | orchestrator |
2025-06-02 17:29:32.005424 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 17:29:32.005782 | orchestrator | Monday 02 June 2025 17:29:31 +0000 (0:00:00.138) 0:00:12.497 ***********
2025-06-02 17:29:32.161314 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:32.161860 | orchestrator |
2025-06-02 17:29:32.162521 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 17:29:32.167238 | orchestrator | Monday 02 June 2025 17:29:32 +0000 (0:00:00.163) 0:00:12.661 ***********
2025-06-02 17:29:32.271569 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:32.274404 | orchestrator |
2025-06-02 17:29:32.275692 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 17:29:32.276749 | orchestrator | Monday 02 June 2025 17:29:32 +0000 (0:00:00.111) 0:00:12.773 ***********
2025-06-02 17:29:32.396410 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:29:32.398000 | orchestrator | 2025-06-02
17:29:32.399260 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 17:29:32.400880 | orchestrator | Monday 02 June 2025 17:29:32 +0000 (0:00:00.123) 0:00:12.896 *********** 2025-06-02 17:29:32.587321 | orchestrator | changed: [testbed-node-3] => { 2025-06-02 17:29:32.587760 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 17:29:32.589886 | orchestrator |  "ceph_osd_devices": { 2025-06-02 17:29:32.591495 | orchestrator |  "sdb": { 2025-06-02 17:29:32.591519 | orchestrator |  "osd_lvm_uuid": "94958c5d-ab49-5ebf-a5cb-ef67fe0a9704" 2025-06-02 17:29:32.591728 | orchestrator |  }, 2025-06-02 17:29:32.592919 | orchestrator |  "sdc": { 2025-06-02 17:29:32.593840 | orchestrator |  "osd_lvm_uuid": "42dde184-17ae-50b7-8921-f17969f5efd9" 2025-06-02 17:29:32.594579 | orchestrator |  } 2025-06-02 17:29:32.595899 | orchestrator |  }, 2025-06-02 17:29:32.596361 | orchestrator |  "lvm_volumes": [ 2025-06-02 17:29:32.598230 | orchestrator |  { 2025-06-02 17:29:32.598623 | orchestrator |  "data": "osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704", 2025-06-02 17:29:32.599131 | orchestrator |  "data_vg": "ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704" 2025-06-02 17:29:32.601152 | orchestrator |  }, 2025-06-02 17:29:32.601209 | orchestrator |  { 2025-06-02 17:29:32.601906 | orchestrator |  "data": "osd-block-42dde184-17ae-50b7-8921-f17969f5efd9", 2025-06-02 17:29:32.602163 | orchestrator |  "data_vg": "ceph-42dde184-17ae-50b7-8921-f17969f5efd9" 2025-06-02 17:29:32.602796 | orchestrator |  } 2025-06-02 17:29:32.603587 | orchestrator |  ] 2025-06-02 17:29:32.604134 | orchestrator |  } 2025-06-02 17:29:32.604388 | orchestrator | } 2025-06-02 17:29:32.605147 | orchestrator | 2025-06-02 17:29:32.606054 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 17:29:32.606444 | orchestrator | Monday 02 June 2025 17:29:32 +0000 (0:00:00.192) 0:00:13.089 *********** 2025-06-02 
17:29:34.486370 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 17:29:34.487685 | orchestrator | 2025-06-02 17:29:34.488053 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 17:29:34.488255 | orchestrator | 2025-06-02 17:29:34.488801 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 17:29:34.490072 | orchestrator | Monday 02 June 2025 17:29:34 +0000 (0:00:01.900) 0:00:14.989 *********** 2025-06-02 17:29:34.758990 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 17:29:34.763188 | orchestrator | 2025-06-02 17:29:34.763953 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 17:29:34.764562 | orchestrator | Monday 02 June 2025 17:29:34 +0000 (0:00:00.271) 0:00:15.261 *********** 2025-06-02 17:29:35.004556 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:29:35.004833 | orchestrator | 2025-06-02 17:29:35.006497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:35.006820 | orchestrator | Monday 02 June 2025 17:29:34 +0000 (0:00:00.244) 0:00:15.505 *********** 2025-06-02 17:29:35.399618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 17:29:35.400404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 17:29:35.401034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 17:29:35.402434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 17:29:35.404264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 17:29:35.406110 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 17:29:35.406841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 17:29:35.407458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 17:29:35.408212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 17:29:35.408837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 17:29:35.409552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 17:29:35.410189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 17:29:35.410857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 17:29:35.411313 | orchestrator | 2025-06-02 17:29:35.412196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:35.412354 | orchestrator | Monday 02 June 2025 17:29:35 +0000 (0:00:00.394) 0:00:15.899 *********** 2025-06-02 17:29:35.596224 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:35.596552 | orchestrator | 2025-06-02 17:29:35.597491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:35.598782 | orchestrator | Monday 02 June 2025 17:29:35 +0000 (0:00:00.197) 0:00:16.097 *********** 2025-06-02 17:29:35.798378 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:35.798482 | orchestrator | 2025-06-02 17:29:35.798592 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:35.799162 | orchestrator | Monday 02 June 2025 17:29:35 +0000 (0:00:00.201) 0:00:16.298 *********** 2025-06-02 17:29:35.975973 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 17:29:35.977357 | orchestrator | 2025-06-02 17:29:35.979475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:35.981850 | orchestrator | Monday 02 June 2025 17:29:35 +0000 (0:00:00.177) 0:00:16.476 *********** 2025-06-02 17:29:36.202389 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:36.202516 | orchestrator | 2025-06-02 17:29:36.203522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:36.204251 | orchestrator | Monday 02 June 2025 17:29:36 +0000 (0:00:00.227) 0:00:16.703 *********** 2025-06-02 17:29:36.871264 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:36.871347 | orchestrator | 2025-06-02 17:29:36.871727 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:36.872905 | orchestrator | Monday 02 June 2025 17:29:36 +0000 (0:00:00.667) 0:00:17.370 *********** 2025-06-02 17:29:37.066412 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:37.067139 | orchestrator | 2025-06-02 17:29:37.070051 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:37.070085 | orchestrator | Monday 02 June 2025 17:29:37 +0000 (0:00:00.196) 0:00:17.567 *********** 2025-06-02 17:29:37.262069 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:37.262521 | orchestrator | 2025-06-02 17:29:37.263920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:37.265731 | orchestrator | Monday 02 June 2025 17:29:37 +0000 (0:00:00.196) 0:00:17.763 *********** 2025-06-02 17:29:37.492523 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:37.494959 | orchestrator | 2025-06-02 17:29:37.495795 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:37.497105 | 
orchestrator | Monday 02 June 2025 17:29:37 +0000 (0:00:00.230) 0:00:17.993 *********** 2025-06-02 17:29:37.915416 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300) 2025-06-02 17:29:37.916475 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300) 2025-06-02 17:29:37.917005 | orchestrator | 2025-06-02 17:29:37.919504 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:37.919944 | orchestrator | Monday 02 June 2025 17:29:37 +0000 (0:00:00.421) 0:00:18.415 *********** 2025-06-02 17:29:38.350335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33) 2025-06-02 17:29:38.351507 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33) 2025-06-02 17:29:38.354242 | orchestrator | 2025-06-02 17:29:38.354322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:38.354441 | orchestrator | Monday 02 June 2025 17:29:38 +0000 (0:00:00.436) 0:00:18.851 *********** 2025-06-02 17:29:38.805842 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38) 2025-06-02 17:29:38.806219 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38) 2025-06-02 17:29:38.807046 | orchestrator | 2025-06-02 17:29:38.807806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:38.808459 | orchestrator | Monday 02 June 2025 17:29:38 +0000 (0:00:00.456) 0:00:19.307 *********** 2025-06-02 17:29:39.278077 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148) 2025-06-02 17:29:39.278242 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148) 2025-06-02 17:29:39.278328 | orchestrator | 2025-06-02 17:29:39.278912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:29:39.281082 | orchestrator | Monday 02 June 2025 17:29:39 +0000 (0:00:00.472) 0:00:19.780 *********** 2025-06-02 17:29:39.607082 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 17:29:39.607315 | orchestrator | 2025-06-02 17:29:39.608357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:39.609053 | orchestrator | Monday 02 June 2025 17:29:39 +0000 (0:00:00.328) 0:00:20.109 *********** 2025-06-02 17:29:39.986560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 17:29:39.987149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 17:29:39.988266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 17:29:39.989352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 17:29:39.990481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 17:29:39.991131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 17:29:39.992165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 17:29:39.993300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 17:29:39.994143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 17:29:39.994766 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 17:29:39.995887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 17:29:39.996362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 17:29:39.997213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 17:29:39.998134 | orchestrator | 2025-06-02 17:29:39.998851 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:39.999448 | orchestrator | Monday 02 June 2025 17:29:39 +0000 (0:00:00.375) 0:00:20.484 *********** 2025-06-02 17:29:40.206447 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:40.207160 | orchestrator | 2025-06-02 17:29:40.207895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:40.210073 | orchestrator | Monday 02 June 2025 17:29:40 +0000 (0:00:00.220) 0:00:20.705 *********** 2025-06-02 17:29:40.726809 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:40.728198 | orchestrator | 2025-06-02 17:29:40.731592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:40.732311 | orchestrator | Monday 02 June 2025 17:29:40 +0000 (0:00:00.523) 0:00:21.228 *********** 2025-06-02 17:29:40.919249 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:40.921890 | orchestrator | 2025-06-02 17:29:40.925001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:40.926070 | orchestrator | Monday 02 June 2025 17:29:40 +0000 (0:00:00.193) 0:00:21.421 *********** 2025-06-02 17:29:41.132132 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:41.132818 | orchestrator | 2025-06-02 17:29:41.132980 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-06-02 17:29:41.134129 | orchestrator | Monday 02 June 2025 17:29:41 +0000 (0:00:00.211) 0:00:21.633 *********** 2025-06-02 17:29:41.334480 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:41.334659 | orchestrator | 2025-06-02 17:29:41.334738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:41.335901 | orchestrator | Monday 02 June 2025 17:29:41 +0000 (0:00:00.202) 0:00:21.836 *********** 2025-06-02 17:29:41.503442 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:41.503713 | orchestrator | 2025-06-02 17:29:41.504125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:41.504558 | orchestrator | Monday 02 June 2025 17:29:41 +0000 (0:00:00.169) 0:00:22.005 *********** 2025-06-02 17:29:41.695330 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:41.696107 | orchestrator | 2025-06-02 17:29:41.697036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:41.697730 | orchestrator | Monday 02 June 2025 17:29:41 +0000 (0:00:00.192) 0:00:22.197 *********** 2025-06-02 17:29:41.864758 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:41.866373 | orchestrator | 2025-06-02 17:29:41.867597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:41.868754 | orchestrator | Monday 02 June 2025 17:29:41 +0000 (0:00:00.168) 0:00:22.365 *********** 2025-06-02 17:29:42.495722 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 17:29:42.498474 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 17:29:42.499824 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 17:29:42.500946 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 17:29:42.501941 | orchestrator | 2025-06-02 
17:29:42.503067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:42.504006 | orchestrator | Monday 02 June 2025 17:29:42 +0000 (0:00:00.629) 0:00:22.995 *********** 2025-06-02 17:29:42.704823 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:42.706873 | orchestrator | 2025-06-02 17:29:42.707880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:42.708955 | orchestrator | Monday 02 June 2025 17:29:42 +0000 (0:00:00.210) 0:00:23.206 *********** 2025-06-02 17:29:42.891604 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:42.893564 | orchestrator | 2025-06-02 17:29:42.894831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:42.896335 | orchestrator | Monday 02 June 2025 17:29:42 +0000 (0:00:00.185) 0:00:23.392 *********** 2025-06-02 17:29:43.077375 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:43.077478 | orchestrator | 2025-06-02 17:29:43.077749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:29:43.078335 | orchestrator | Monday 02 June 2025 17:29:43 +0000 (0:00:00.184) 0:00:23.577 *********** 2025-06-02 17:29:43.276916 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:43.278465 | orchestrator | 2025-06-02 17:29:43.280474 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 17:29:43.281227 | orchestrator | Monday 02 June 2025 17:29:43 +0000 (0:00:00.201) 0:00:23.779 *********** 2025-06-02 17:29:43.565891 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-02 17:29:43.569811 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-02 17:29:43.569876 | orchestrator | 2025-06-02 17:29:43.571139 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-06-02 17:29:43.572973 | orchestrator | Monday 02 June 2025 17:29:43 +0000 (0:00:00.289) 0:00:24.068 *********** 2025-06-02 17:29:43.733270 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:43.734551 | orchestrator | 2025-06-02 17:29:43.736202 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 17:29:43.736924 | orchestrator | Monday 02 June 2025 17:29:43 +0000 (0:00:00.166) 0:00:24.235 *********** 2025-06-02 17:29:43.875079 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:43.875425 | orchestrator | 2025-06-02 17:29:43.876258 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 17:29:43.880048 | orchestrator | Monday 02 June 2025 17:29:43 +0000 (0:00:00.142) 0:00:24.377 *********** 2025-06-02 17:29:44.004809 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:44.004886 | orchestrator | 2025-06-02 17:29:44.004892 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 17:29:44.004954 | orchestrator | Monday 02 June 2025 17:29:43 +0000 (0:00:00.128) 0:00:24.506 *********** 2025-06-02 17:29:44.127747 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:29:44.128269 | orchestrator | 2025-06-02 17:29:44.132039 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 17:29:44.132415 | orchestrator | Monday 02 June 2025 17:29:44 +0000 (0:00:00.123) 0:00:24.629 *********** 2025-06-02 17:29:44.286314 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de836c00-0412-5e15-aa8a-abef9bebfb26'}}) 2025-06-02 17:29:44.286739 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'}}) 2025-06-02 17:29:44.287741 | orchestrator | 2025-06-02 17:29:44.288259 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 17:29:44.289044 | orchestrator | Monday 02 June 2025 17:29:44 +0000 (0:00:00.157) 0:00:24.787 *********** 2025-06-02 17:29:44.429545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de836c00-0412-5e15-aa8a-abef9bebfb26'}})  2025-06-02 17:29:44.430149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'}})  2025-06-02 17:29:44.430902 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:44.432260 | orchestrator | 2025-06-02 17:29:44.432555 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 17:29:44.433225 | orchestrator | Monday 02 June 2025 17:29:44 +0000 (0:00:00.144) 0:00:24.932 *********** 2025-06-02 17:29:44.582390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de836c00-0412-5e15-aa8a-abef9bebfb26'}})  2025-06-02 17:29:44.584193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'}})  2025-06-02 17:29:44.585013 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:44.586072 | orchestrator | 2025-06-02 17:29:44.587007 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 17:29:44.587803 | orchestrator | Monday 02 June 2025 17:29:44 +0000 (0:00:00.152) 0:00:25.084 *********** 2025-06-02 17:29:44.747384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de836c00-0412-5e15-aa8a-abef9bebfb26'}})  2025-06-02 17:29:44.749698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'}})  2025-06-02 17:29:44.750591 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:44.751019 | 
orchestrator | 2025-06-02 17:29:44.751844 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 17:29:44.752175 | orchestrator | Monday 02 June 2025 17:29:44 +0000 (0:00:00.164) 0:00:25.249 *********** 2025-06-02 17:29:44.889190 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:29:44.890654 | orchestrator | 2025-06-02 17:29:44.892524 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 17:29:44.893308 | orchestrator | Monday 02 June 2025 17:29:44 +0000 (0:00:00.141) 0:00:25.391 *********** 2025-06-02 17:29:45.053751 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:29:45.054716 | orchestrator | 2025-06-02 17:29:45.054741 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 17:29:45.055913 | orchestrator | Monday 02 June 2025 17:29:45 +0000 (0:00:00.162) 0:00:25.554 *********** 2025-06-02 17:29:45.178294 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:45.178503 | orchestrator | 2025-06-02 17:29:45.179714 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 17:29:45.180465 | orchestrator | Monday 02 June 2025 17:29:45 +0000 (0:00:00.126) 0:00:25.680 *********** 2025-06-02 17:29:45.447503 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:45.448470 | orchestrator | 2025-06-02 17:29:45.448489 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 17:29:45.448495 | orchestrator | Monday 02 June 2025 17:29:45 +0000 (0:00:00.267) 0:00:25.947 *********** 2025-06-02 17:29:45.553186 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:45.554872 | orchestrator | 2025-06-02 17:29:45.555390 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 17:29:45.556520 | orchestrator | Monday 02 June 2025 17:29:45 +0000 
(0:00:00.108) 0:00:26.055 *********** 2025-06-02 17:29:45.696556 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:29:45.698100 | orchestrator |  "ceph_osd_devices": { 2025-06-02 17:29:45.699264 | orchestrator |  "sdb": { 2025-06-02 17:29:45.701498 | orchestrator |  "osd_lvm_uuid": "de836c00-0412-5e15-aa8a-abef9bebfb26" 2025-06-02 17:29:45.701790 | orchestrator |  }, 2025-06-02 17:29:45.703418 | orchestrator |  "sdc": { 2025-06-02 17:29:45.704031 | orchestrator |  "osd_lvm_uuid": "c404b240-9cf0-5c0e-97ba-c570a8ba4cd9" 2025-06-02 17:29:45.704880 | orchestrator |  } 2025-06-02 17:29:45.705278 | orchestrator |  } 2025-06-02 17:29:45.705807 | orchestrator | } 2025-06-02 17:29:45.706260 | orchestrator | 2025-06-02 17:29:45.706554 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 17:29:45.706978 | orchestrator | Monday 02 June 2025 17:29:45 +0000 (0:00:00.142) 0:00:26.198 *********** 2025-06-02 17:29:45.819388 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:45.820869 | orchestrator | 2025-06-02 17:29:45.821563 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 17:29:45.822962 | orchestrator | Monday 02 June 2025 17:29:45 +0000 (0:00:00.122) 0:00:26.321 *********** 2025-06-02 17:29:45.930111 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:45.930738 | orchestrator | 2025-06-02 17:29:45.932579 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 17:29:45.933462 | orchestrator | Monday 02 June 2025 17:29:45 +0000 (0:00:00.109) 0:00:26.430 *********** 2025-06-02 17:29:46.040586 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:46.041424 | orchestrator | 2025-06-02 17:29:46.042006 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 17:29:46.042517 | orchestrator | Monday 02 June 2025 17:29:46 +0000 
(0:00:00.109) 0:00:26.540 *********** 2025-06-02 17:29:46.236733 | orchestrator | changed: [testbed-node-4] => { 2025-06-02 17:29:46.237780 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 17:29:46.237814 | orchestrator |  "ceph_osd_devices": { 2025-06-02 17:29:46.238100 | orchestrator |  "sdb": { 2025-06-02 17:29:46.240477 | orchestrator |  "osd_lvm_uuid": "de836c00-0412-5e15-aa8a-abef9bebfb26" 2025-06-02 17:29:46.240922 | orchestrator |  }, 2025-06-02 17:29:46.243677 | orchestrator |  "sdc": { 2025-06-02 17:29:46.244475 | orchestrator |  "osd_lvm_uuid": "c404b240-9cf0-5c0e-97ba-c570a8ba4cd9" 2025-06-02 17:29:46.248011 | orchestrator |  } 2025-06-02 17:29:46.249040 | orchestrator |  }, 2025-06-02 17:29:46.249972 | orchestrator |  "lvm_volumes": [ 2025-06-02 17:29:46.251896 | orchestrator |  { 2025-06-02 17:29:46.251968 | orchestrator |  "data": "osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26", 2025-06-02 17:29:46.251989 | orchestrator |  "data_vg": "ceph-de836c00-0412-5e15-aa8a-abef9bebfb26" 2025-06-02 17:29:46.252344 | orchestrator |  }, 2025-06-02 17:29:46.252885 | orchestrator |  { 2025-06-02 17:29:46.253394 | orchestrator |  "data": "osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9", 2025-06-02 17:29:46.254270 | orchestrator |  "data_vg": "ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9" 2025-06-02 17:29:46.254809 | orchestrator |  } 2025-06-02 17:29:46.259723 | orchestrator |  ] 2025-06-02 17:29:46.259803 | orchestrator |  } 2025-06-02 17:29:46.259820 | orchestrator | } 2025-06-02 17:29:46.259833 | orchestrator | 2025-06-02 17:29:46.259845 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 17:29:46.259880 | orchestrator | Monday 02 June 2025 17:29:46 +0000 (0:00:00.196) 0:00:26.737 *********** 2025-06-02 17:29:47.246102 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 17:29:47.246787 | orchestrator | 2025-06-02 17:29:47.247751 | orchestrator | PLAY [Ceph 
configure LVM] ******************************************************
2025-06-02 17:29:47.249004 | orchestrator |
2025-06-02 17:29:47.249849 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 17:29:47.252195 | orchestrator | Monday 02 June 2025 17:29:47 +0000 (0:00:01.009) 0:00:27.746 ***********
2025-06-02 17:29:47.647567 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 17:29:47.649568 | orchestrator |
2025-06-02 17:29:47.650793 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 17:29:47.652490 | orchestrator | Monday 02 June 2025 17:29:47 +0000 (0:00:00.402) 0:00:28.148 ***********
2025-06-02 17:29:48.156174 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:29:48.156428 | orchestrator |
2025-06-02 17:29:48.157683 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:48.158240 | orchestrator | Monday 02 June 2025 17:29:48 +0000 (0:00:00.507) 0:00:28.656 ***********
2025-06-02 17:29:48.501148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 17:29:48.502951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 17:29:48.502984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 17:29:48.504071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 17:29:48.505056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 17:29:48.506507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 17:29:48.507329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 17:29:48.508710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 17:29:48.509111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 17:29:48.509658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 17:29:48.510299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 17:29:48.511512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 17:29:48.511552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 17:29:48.511935 | orchestrator |
2025-06-02 17:29:48.513027 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:48.513824 | orchestrator | Monday 02 June 2025 17:29:48 +0000 (0:00:00.346) 0:00:29.003 ***********
2025-06-02 17:29:48.699136 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:48.700222 | orchestrator |
2025-06-02 17:29:48.700654 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:48.701145 | orchestrator | Monday 02 June 2025 17:29:48 +0000 (0:00:00.197) 0:00:29.200 ***********
2025-06-02 17:29:48.930534 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:48.930920 | orchestrator |
2025-06-02 17:29:48.931607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:48.932002 | orchestrator | Monday 02 June 2025 17:29:48 +0000 (0:00:00.232) 0:00:29.433 ***********
2025-06-02 17:29:49.128542 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:49.129673 | orchestrator |
2025-06-02 17:29:49.130973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:49.131739 | orchestrator | Monday 02 June 2025 17:29:49 +0000 (0:00:00.196) 0:00:29.630 ***********
2025-06-02 17:29:49.316807 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:49.316888 | orchestrator |
2025-06-02 17:29:49.319001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:49.319407 | orchestrator | Monday 02 June 2025 17:29:49 +0000 (0:00:00.186) 0:00:29.817 ***********
2025-06-02 17:29:49.533228 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:49.534512 | orchestrator |
2025-06-02 17:29:49.536367 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:49.536685 | orchestrator | Monday 02 June 2025 17:29:49 +0000 (0:00:00.218) 0:00:30.035 ***********
2025-06-02 17:29:49.728051 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:49.728908 | orchestrator |
2025-06-02 17:29:49.729777 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:49.730420 | orchestrator | Monday 02 June 2025 17:29:49 +0000 (0:00:00.194) 0:00:30.230 ***********
2025-06-02 17:29:49.914446 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:49.915613 | orchestrator |
2025-06-02 17:29:49.917017 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:49.918132 | orchestrator | Monday 02 June 2025 17:29:49 +0000 (0:00:00.186) 0:00:30.416 ***********
2025-06-02 17:29:50.113889 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:50.114893 | orchestrator |
2025-06-02 17:29:50.115717 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:50.116388 | orchestrator | Monday 02 June 2025 17:29:50 +0000 (0:00:00.199) 0:00:30.615 ***********
2025-06-02 17:29:50.671774 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8)
2025-06-02 17:29:50.672420 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8)
2025-06-02 17:29:50.673347 | orchestrator |
2025-06-02 17:29:50.674771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:50.675677 | orchestrator | Monday 02 June 2025 17:29:50 +0000 (0:00:00.555) 0:00:31.171 ***********
2025-06-02 17:29:51.343045 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f)
2025-06-02 17:29:51.343606 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f)
2025-06-02 17:29:51.344495 | orchestrator |
2025-06-02 17:29:51.345221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:51.345910 | orchestrator | Monday 02 June 2025 17:29:51 +0000 (0:00:00.671) 0:00:31.843 ***********
2025-06-02 17:29:51.727668 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb)
2025-06-02 17:29:51.728396 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb)
2025-06-02 17:29:51.728886 | orchestrator |
2025-06-02 17:29:51.729514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:51.730419 | orchestrator | Monday 02 June 2025 17:29:51 +0000 (0:00:00.386) 0:00:32.230 ***********
2025-06-02 17:29:52.124594 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a)
2025-06-02 17:29:52.124804 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a)
2025-06-02 17:29:52.125701 | orchestrator |
2025-06-02 17:29:52.126849 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:29:52.129543 | orchestrator | Monday 02 June 2025 17:29:52 +0000 (0:00:00.397) 0:00:32.627 ***********
2025-06-02 17:29:52.402136 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 17:29:52.402892 | orchestrator |
2025-06-02 17:29:52.403905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:52.404452 | orchestrator | Monday 02 June 2025 17:29:52 +0000 (0:00:00.276) 0:00:32.903 ***********
2025-06-02 17:29:52.749889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 17:29:52.750241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 17:29:52.750267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 17:29:52.750377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 17:29:52.752290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 17:29:52.753803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 17:29:52.754578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 17:29:52.757282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 17:29:52.758114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 17:29:52.758980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 17:29:52.759724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 17:29:52.760606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 17:29:52.761662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 17:29:52.762644 | orchestrator |
2025-06-02 17:29:52.763765 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:52.763995 | orchestrator | Monday 02 June 2025 17:29:52 +0000 (0:00:00.348) 0:00:33.252 ***********
2025-06-02 17:29:52.962564 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:52.963270 | orchestrator |
2025-06-02 17:29:52.964481 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:52.965539 | orchestrator | Monday 02 June 2025 17:29:52 +0000 (0:00:00.210) 0:00:33.463 ***********
2025-06-02 17:29:53.147577 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:53.147886 | orchestrator |
2025-06-02 17:29:53.149343 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:53.151010 | orchestrator | Monday 02 June 2025 17:29:53 +0000 (0:00:00.184) 0:00:33.648 ***********
2025-06-02 17:29:53.344564 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:53.345327 | orchestrator |
2025-06-02 17:29:53.347181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:53.347206 | orchestrator | Monday 02 June 2025 17:29:53 +0000 (0:00:00.199) 0:00:33.847 ***********
2025-06-02 17:29:53.556212 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:53.556742 | orchestrator |
2025-06-02 17:29:53.557726 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:53.558167 | orchestrator | Monday 02 June 2025 17:29:53 +0000 (0:00:00.211) 0:00:34.059 ***********
2025-06-02 17:29:53.756003 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:53.756454 | orchestrator |
2025-06-02 17:29:53.757970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:53.758203 | orchestrator | Monday 02 June 2025 17:29:53 +0000 (0:00:00.198) 0:00:34.257 ***********
2025-06-02 17:29:54.318397 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:54.318504 | orchestrator |
2025-06-02 17:29:54.318583 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:54.319116 | orchestrator | Monday 02 June 2025 17:29:54 +0000 (0:00:00.558) 0:00:34.816 ***********
2025-06-02 17:29:54.514807 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:54.515973 | orchestrator |
2025-06-02 17:29:54.516372 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:54.518882 | orchestrator | Monday 02 June 2025 17:29:54 +0000 (0:00:00.201) 0:00:35.017 ***********
2025-06-02 17:29:54.721321 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:54.726575 | orchestrator |
2025-06-02 17:29:54.727357 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:54.728370 | orchestrator | Monday 02 June 2025 17:29:54 +0000 (0:00:00.203) 0:00:35.220 ***********
2025-06-02 17:29:55.484122 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-02 17:29:55.484897 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-02 17:29:55.486606 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-02 17:29:55.489030 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-02 17:29:55.489784 | orchestrator |
2025-06-02 17:29:55.491146 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:55.492458 | orchestrator | Monday 02 June 2025 17:29:55 +0000 (0:00:00.764) 0:00:35.985 ***********
2025-06-02 17:29:55.677700 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:55.678148 | orchestrator |
2025-06-02 17:29:55.679570 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:55.680551 | orchestrator | Monday 02 June 2025 17:29:55 +0000 (0:00:00.194) 0:00:36.179 ***********
2025-06-02 17:29:55.877295 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:55.878921 | orchestrator |
2025-06-02 17:29:55.879814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:55.881039 | orchestrator | Monday 02 June 2025 17:29:55 +0000 (0:00:00.199) 0:00:36.379 ***********
2025-06-02 17:29:56.062293 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:56.063276 | orchestrator |
2025-06-02 17:29:56.063309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:29:56.064375 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:00.184) 0:00:36.563 ***********
2025-06-02 17:29:56.250351 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:56.251837 | orchestrator |
2025-06-02 17:29:56.253597 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 17:29:56.253921 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:00.187) 0:00:36.751 ***********
2025-06-02 17:29:56.416530 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-02 17:29:56.416867 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-02 17:29:56.418821 | orchestrator |
2025-06-02 17:29:56.418878 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 17:29:56.419499 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:00.167) 0:00:36.919 ***********
2025-06-02 17:29:56.557146 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:56.557287 | orchestrator |
2025-06-02 17:29:56.557394 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 17:29:56.558208 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:00.134) 0:00:37.053 ***********
2025-06-02 17:29:56.694900 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:56.697207 | orchestrator |
2025-06-02 17:29:56.697382 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 17:29:56.698313 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:00.143) 0:00:37.197 ***********
2025-06-02 17:29:56.820296 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:56.820488 | orchestrator |
2025-06-02 17:29:56.821237 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 17:29:56.821656 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:00.125) 0:00:37.323 ***********
2025-06-02 17:29:57.087362 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:29:57.089200 | orchestrator |
2025-06-02 17:29:57.090330 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 17:29:57.091547 | orchestrator | Monday 02 June 2025 17:29:57 +0000 (0:00:00.267) 0:00:37.590 ***********
2025-06-02 17:29:57.266688 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}})
2025-06-02 17:29:57.266794 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}})
2025-06-02 17:29:57.267327 | orchestrator |
2025-06-02 17:29:57.267820 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 17:29:57.268232 | orchestrator | Monday 02 June 2025 17:29:57 +0000 (0:00:00.176) 0:00:37.767 ***********
2025-06-02 17:29:57.431920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}})
2025-06-02 17:29:57.432761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}})
2025-06-02 17:29:57.433384 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:57.434271 | orchestrator |
2025-06-02 17:29:57.435476 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 17:29:57.435848 | orchestrator | Monday 02 June 2025 17:29:57 +0000 (0:00:00.167) 0:00:37.934 ***********
2025-06-02 17:29:57.612208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}})
2025-06-02 17:29:57.613208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}})
2025-06-02 17:29:57.613817 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:57.614776 | orchestrator |
2025-06-02 17:29:57.615545 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 17:29:57.616291 | orchestrator | Monday 02 June 2025 17:29:57 +0000 (0:00:00.178) 0:00:38.113 ***********
2025-06-02 17:29:57.764601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}})
2025-06-02 17:29:57.764942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}})
2025-06-02 17:29:57.765494 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:57.766525 | orchestrator |
2025-06-02 17:29:57.767137 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 17:29:57.767850 | orchestrator | Monday 02 June 2025 17:29:57 +0000 (0:00:00.151) 0:00:38.264 ***********
2025-06-02 17:29:57.914551 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:29:57.918262 | orchestrator |
2025-06-02 17:29:57.919239 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 17:29:57.919582 | orchestrator | Monday 02 June 2025 17:29:57 +0000 (0:00:00.151) 0:00:38.416 ***********
2025-06-02 17:29:58.037402 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:29:58.037480 | orchestrator |
2025-06-02 17:29:58.038478 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 17:29:58.038825 | orchestrator | Monday 02 June 2025 17:29:58 +0000 (0:00:00.122) 0:00:38.539 ***********
2025-06-02 17:29:58.272188 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:58.272713 | orchestrator |
2025-06-02 17:29:58.273337 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 17:29:58.273898 | orchestrator | Monday 02 June 2025 17:29:58 +0000 (0:00:00.234) 0:00:38.773 ***********
2025-06-02 17:29:58.416004 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:58.416998 | orchestrator |
2025-06-02 17:29:58.418079 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 17:29:58.419212 | orchestrator | Monday 02 June 2025 17:29:58 +0000 (0:00:00.144) 0:00:38.918 ***********
2025-06-02 17:29:58.543721 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:58.544365 | orchestrator |
2025-06-02 17:29:58.545716 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 17:29:58.546700 | orchestrator | Monday 02 June 2025 17:29:58 +0000 (0:00:00.127) 0:00:39.046 ***********
2025-06-02 17:29:58.686660 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 17:29:58.687935 | orchestrator |  "ceph_osd_devices": {
2025-06-02 17:29:58.688987 | orchestrator |  "sdb": {
2025-06-02 17:29:58.690145 | orchestrator |  "osd_lvm_uuid": "33d58ee2-4c10-58b1-ba9c-becc4d68c01c"
2025-06-02 17:29:58.690733 | orchestrator |  },
2025-06-02 17:29:58.692181 | orchestrator |  "sdc": {
2025-06-02 17:29:58.692852 | orchestrator |  "osd_lvm_uuid": "a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b"
2025-06-02 17:29:58.693740 | orchestrator |  }
2025-06-02 17:29:58.694466 | orchestrator |  }
2025-06-02 17:29:58.695167 | orchestrator | }
2025-06-02 17:29:58.695780 | orchestrator |
2025-06-02 17:29:58.696361 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 17:29:58.696994 | orchestrator | Monday 02 June 2025 17:29:58 +0000 (0:00:00.142) 0:00:39.188 ***********
2025-06-02 17:29:58.789002 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:58.789190 | orchestrator |
2025-06-02 17:29:58.789891 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 17:29:58.791092 | orchestrator | Monday 02 June 2025 17:29:58 +0000 (0:00:00.101) 0:00:39.290 ***********
2025-06-02 17:29:59.062579 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:59.062845 | orchestrator |
2025-06-02 17:29:59.064128 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 17:29:59.064576 | orchestrator | Monday 02 June 2025 17:29:59 +0000 (0:00:00.273) 0:00:39.564 ***********
2025-06-02 17:29:59.188287 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:29:59.188460 | orchestrator |
2025-06-02 17:29:59.189280 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 17:29:59.190077 | orchestrator | Monday 02 June 2025 17:29:59 +0000 (0:00:00.127) 0:00:39.691 ***********
2025-06-02 17:29:59.370895 | orchestrator | changed: [testbed-node-5] => {
2025-06-02 17:29:59.371133 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 17:29:59.371391 | orchestrator |  "ceph_osd_devices": {
2025-06-02 17:29:59.371518 | orchestrator |  "sdb": {
2025-06-02 17:29:59.373185 | orchestrator |  "osd_lvm_uuid": "33d58ee2-4c10-58b1-ba9c-becc4d68c01c"
2025-06-02 17:29:59.373349 | orchestrator |  },
2025-06-02 17:29:59.373764 | orchestrator |  "sdc": {
2025-06-02 17:29:59.374185 | orchestrator |  "osd_lvm_uuid": "a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b"
2025-06-02 17:29:59.374516 | orchestrator |  }
2025-06-02 17:29:59.374898 | orchestrator |  },
2025-06-02 17:29:59.375202 | orchestrator |  "lvm_volumes": [
2025-06-02 17:29:59.375760 | orchestrator |  {
2025-06-02 17:29:59.375850 | orchestrator |  "data": "osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c",
2025-06-02 17:29:59.376216 | orchestrator |  "data_vg": "ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c"
2025-06-02 17:29:59.376710 | orchestrator |  },
2025-06-02 17:29:59.377259 | orchestrator |  {
2025-06-02 17:29:59.377802 | orchestrator |  "data": "osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b",
2025-06-02 17:29:59.378119 | orchestrator |  "data_vg": "ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b"
2025-06-02 17:29:59.378595 | orchestrator |  }
2025-06-02 17:29:59.378758 | orchestrator |  ]
2025-06-02 17:29:59.379212 | orchestrator |  }
2025-06-02 17:29:59.379604 | orchestrator | }
2025-06-02 17:29:59.380221 | orchestrator |
2025-06-02 17:29:59.380645 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 17:29:59.380900 | orchestrator | Monday 02 June 2025 17:29:59 +0000 (0:00:00.179) 0:00:39.871 ***********
2025-06-02 17:30:00.282566 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 17:30:00.283560 | orchestrator |
2025-06-02 17:30:00.284578 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:30:00.284850 | orchestrator | 2025-06-02 17:30:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:30:00.284953 | orchestrator | 2025-06-02 17:30:00 | INFO  | Please wait and do not abort execution.
2025-06-02 17:30:00.287896 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 17:30:00.288590 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 17:30:00.289251 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 17:30:00.290498 | orchestrator |
2025-06-02 17:30:00.290541 | orchestrator |
2025-06-02 17:30:00.291270 | orchestrator |
2025-06-02 17:30:00.291364 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:30:00.291927 | orchestrator | Monday 02 June 2025 17:30:00 +0000 (0:00:00.913) 0:00:40.784 ***********
2025-06-02 17:30:00.292706 | orchestrator | ===============================================================================
2025-06-02 17:30:00.292803 | orchestrator | Write configuration file ------------------------------------------------ 3.82s
2025-06-02 17:30:00.293284 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s
2025-06-02 17:30:00.293893 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2025-06-02 17:30:00.294149 | orchestrator | Add known links to the list of available block devices ------------------ 1.10s
2025-06-02 17:30:00.294660 | orchestrator | Get initial list of available block devices ----------------------------- 0.99s
2025-06-02 17:30:00.295081 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.93s
2025-06-02 17:30:00.295559 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2025-06-02 17:30:00.295901 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2025-06-02 17:30:00.296563 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-06-02 17:30:00.296687 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-06-02 17:30:00.297161 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-06-02 17:30:00.298261 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.65s
2025-06-02 17:30:00.298313 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-06-02 17:30:00.298326 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.62s
2025-06-02 17:30:00.299105 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-06-02 17:30:00.299147 | orchestrator | Print configuration data ------------------------------------------------ 0.57s
2025-06-02 17:30:00.299437 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2025-06-02 17:30:00.299886 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-06-02 17:30:00.300214 | orchestrator | Set WAL devices config data --------------------------------------------- 0.55s
2025-06-02 17:30:00.300590 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s
2025-06-02 17:30:12.525748 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:30:12.525833 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:30:12.525841 | orchestrator | Registering Redlock._release_script
2025-06-02 17:30:12.604550 | orchestrator | 2025-06-02 17:30:12 | INFO  | Task 14179d56-9352-47b0-9ce8-69ae05d8f04d (sync inventory) is running in background. Output coming soon.
2025-06-02 17:30:59.609872 | orchestrator | 2025-06-02 17:30:41 | INFO  | Starting group_vars file reorganization
2025-06-02 17:30:59.609962 | orchestrator | 2025-06-02 17:30:41 | INFO  | Moved 0 file(s) to their respective directories
2025-06-02 17:30:59.609971 | orchestrator | 2025-06-02 17:30:41 | INFO  | Group_vars file reorganization completed
2025-06-02 17:30:59.609978 | orchestrator | 2025-06-02 17:30:43 | INFO  | Starting variable preparation from inventory
2025-06-02 17:30:59.609985 | orchestrator | 2025-06-02 17:30:44 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-02 17:30:59.609992 | orchestrator | 2025-06-02 17:30:44 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-02 17:30:59.610064 | orchestrator | 2025-06-02 17:30:44 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-02 17:30:59.610073 | orchestrator | 2025-06-02 17:30:44 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-02 17:30:59.610080 | orchestrator | 2025-06-02 17:30:44 | INFO  | Variable preparation completed:
2025-06-02 17:30:59.610087 | orchestrator | 2025-06-02 17:30:45 | INFO  | Starting inventory overwrite handling
2025-06-02 17:30:59.610093 | orchestrator | 2025-06-02 17:30:45 | INFO  | Handling group overwrites in 99-overwrite
2025-06-02 17:30:59.610100 | orchestrator | 2025-06-02 17:30:45 | INFO  | Removing group frr:children from 60-generic
2025-06-02 17:30:59.610106 | orchestrator | 2025-06-02 17:30:45 | INFO  | Removing group storage:children from 50-kolla
2025-06-02 17:30:59.610112 | orchestrator | 2025-06-02 17:30:45 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-02 17:30:59.610126 | orchestrator | 2025-06-02 17:30:45 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-02 17:30:59.610132 | orchestrator | 2025-06-02 17:30:45 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-02 17:30:59.610139 | orchestrator | 2025-06-02 17:30:45 | INFO  | Handling group overwrites in 20-roles
2025-06-02 17:30:59.610186 | orchestrator | 2025-06-02 17:30:45 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-02 17:30:59.610193 | orchestrator | 2025-06-02 17:30:45 | INFO  | Removed 6 group(s) in total
2025-06-02 17:30:59.610200 | orchestrator | 2025-06-02 17:30:45 | INFO  | Inventory overwrite handling completed
2025-06-02 17:30:59.610206 | orchestrator | 2025-06-02 17:30:47 | INFO  | Starting merge of inventory files
2025-06-02 17:30:59.610212 | orchestrator | 2025-06-02 17:30:47 | INFO  | Inventory files merged successfully
2025-06-02 17:30:59.610218 | orchestrator | 2025-06-02 17:30:51 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-02 17:30:59.610225 | orchestrator | 2025-06-02 17:30:58 | INFO  | Successfully wrote ClusterShell configuration
2025-06-02 17:31:01.688442 | orchestrator | 2025-06-02 17:31:01 | INFO  | Task 08f0a244-8cd9-4efb-a662-86c67eefbc17 (ceph-create-lvm-devices) was prepared for execution.
2025-06-02 17:31:01.688536 | orchestrator | 2025-06-02 17:31:01 | INFO  | It takes a moment until task 08f0a244-8cd9-4efb-a662-86c67eefbc17 (ceph-create-lvm-devices) has been started and output is visible here.
2025-06-02 17:31:05.971995 | orchestrator |
2025-06-02 17:31:05.972888 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 17:31:05.975778 | orchestrator |
2025-06-02 17:31:05.976981 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 17:31:05.978553 | orchestrator | Monday 02 June 2025 17:31:05 +0000 (0:00:00.330) 0:00:00.330 ***********
2025-06-02 17:31:06.249017 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 17:31:06.249729 | orchestrator |
2025-06-02 17:31:06.251656 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 17:31:06.252959 | orchestrator | Monday 02 June 2025 17:31:06 +0000 (0:00:00.279) 0:00:00.610 ***********
2025-06-02 17:31:06.484030 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:31:06.485345 | orchestrator |
2025-06-02 17:31:06.486260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:06.487918 | orchestrator | Monday 02 June 2025 17:31:06 +0000 (0:00:00.233) 0:00:00.844 ***********
2025-06-02 17:31:06.883132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-02 17:31:06.883942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-02 17:31:06.885363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-02 17:31:06.887413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-02 17:31:06.887839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-02 17:31:06.889166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-02 17:31:06.890210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-02 17:31:06.890918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-02 17:31:06.891243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-02 17:31:06.891720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-02 17:31:06.892660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-02 17:31:06.892902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-02 17:31:06.893301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-02 17:31:06.893901 | orchestrator |
2025-06-02 17:31:06.894273 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:06.894758 | orchestrator | Monday 02 June 2025 17:31:06 +0000 (0:00:00.399) 0:00:01.243 ***********
2025-06-02 17:31:07.365278 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:07.365725 | orchestrator |
2025-06-02 17:31:07.366603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:07.367276 | orchestrator | Monday 02 June 2025 17:31:07 +0000 (0:00:00.482) 0:00:01.726 ***********
2025-06-02 17:31:07.566902 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:07.567237 | orchestrator |
2025-06-02 17:31:07.567977 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:07.568797 | orchestrator | Monday 02 June 2025 17:31:07 +0000 (0:00:00.202) 0:00:01.929 ***********
2025-06-02 17:31:07.780167 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:07.780753 | orchestrator |
2025-06-02 17:31:07.783553 | orchestrator | TASK [Add known links
to the list of available block devices] ****************** 2025-06-02 17:31:07.783585 | orchestrator | Monday 02 June 2025 17:31:07 +0000 (0:00:00.213) 0:00:02.142 *********** 2025-06-02 17:31:07.980715 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:07.980822 | orchestrator | 2025-06-02 17:31:07.981336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:07.982170 | orchestrator | Monday 02 June 2025 17:31:07 +0000 (0:00:00.200) 0:00:02.343 *********** 2025-06-02 17:31:08.196433 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:08.197786 | orchestrator | 2025-06-02 17:31:08.198310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:08.199280 | orchestrator | Monday 02 June 2025 17:31:08 +0000 (0:00:00.214) 0:00:02.557 *********** 2025-06-02 17:31:08.407955 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:08.408056 | orchestrator | 2025-06-02 17:31:08.410485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:08.410513 | orchestrator | Monday 02 June 2025 17:31:08 +0000 (0:00:00.212) 0:00:02.769 *********** 2025-06-02 17:31:08.618866 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:08.620256 | orchestrator | 2025-06-02 17:31:08.622105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:08.622486 | orchestrator | Monday 02 June 2025 17:31:08 +0000 (0:00:00.211) 0:00:02.981 *********** 2025-06-02 17:31:08.851190 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:08.852489 | orchestrator | 2025-06-02 17:31:08.854229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:08.854256 | orchestrator | Monday 02 June 2025 17:31:08 +0000 (0:00:00.232) 0:00:03.213 *********** 2025-06-02 17:31:09.286129 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602) 2025-06-02 17:31:09.286841 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602) 2025-06-02 17:31:09.287188 | orchestrator | 2025-06-02 17:31:09.288149 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:09.288876 | orchestrator | Monday 02 June 2025 17:31:09 +0000 (0:00:00.432) 0:00:03.645 *********** 2025-06-02 17:31:09.706397 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1) 2025-06-02 17:31:09.706500 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1) 2025-06-02 17:31:09.707276 | orchestrator | 2025-06-02 17:31:09.708224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:09.709929 | orchestrator | Monday 02 June 2025 17:31:09 +0000 (0:00:00.423) 0:00:04.069 *********** 2025-06-02 17:31:10.321927 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361) 2025-06-02 17:31:10.322305 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361) 2025-06-02 17:31:10.323131 | orchestrator | 2025-06-02 17:31:10.324459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:10.325693 | orchestrator | Monday 02 June 2025 17:31:10 +0000 (0:00:00.613) 0:00:04.682 *********** 2025-06-02 17:31:10.964903 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767) 2025-06-02 17:31:10.965736 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767) 2025-06-02 17:31:10.966658 | orchestrator | 2025-06-02 17:31:10.967632 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:10.968076 | orchestrator | Monday 02 June 2025 17:31:10 +0000 (0:00:00.644) 0:00:05.327 *********** 2025-06-02 17:31:11.710433 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 17:31:11.710638 | orchestrator | 2025-06-02 17:31:11.711348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:11.712609 | orchestrator | Monday 02 June 2025 17:31:11 +0000 (0:00:00.744) 0:00:06.071 *********** 2025-06-02 17:31:12.135892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 17:31:12.137000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 17:31:12.137533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 17:31:12.138677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 17:31:12.139588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 17:31:12.139881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 17:31:12.140719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 17:31:12.142156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-02 17:31:12.143054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 17:31:12.143770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 17:31:12.144335 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 17:31:12.144718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 17:31:12.145109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 17:31:12.146454 | orchestrator | 2025-06-02 17:31:12.146592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:12.146868 | orchestrator | Monday 02 June 2025 17:31:12 +0000 (0:00:00.425) 0:00:06.496 *********** 2025-06-02 17:31:12.368477 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:12.369079 | orchestrator | 2025-06-02 17:31:12.369211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:12.369901 | orchestrator | Monday 02 June 2025 17:31:12 +0000 (0:00:00.233) 0:00:06.730 *********** 2025-06-02 17:31:12.572203 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:12.572308 | orchestrator | 2025-06-02 17:31:12.572323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:12.572336 | orchestrator | Monday 02 June 2025 17:31:12 +0000 (0:00:00.203) 0:00:06.933 *********** 2025-06-02 17:31:12.797259 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:12.797648 | orchestrator | 2025-06-02 17:31:12.798678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:12.799534 | orchestrator | Monday 02 June 2025 17:31:12 +0000 (0:00:00.226) 0:00:07.159 *********** 2025-06-02 17:31:13.003239 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:13.003359 | orchestrator | 2025-06-02 17:31:13.003499 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:13.004474 | orchestrator | Monday 02 June 2025 
17:31:12 +0000 (0:00:00.204) 0:00:07.364 *********** 2025-06-02 17:31:13.229810 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:13.231361 | orchestrator | 2025-06-02 17:31:13.231396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:13.231959 | orchestrator | Monday 02 June 2025 17:31:13 +0000 (0:00:00.226) 0:00:07.590 *********** 2025-06-02 17:31:13.441174 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:13.441409 | orchestrator | 2025-06-02 17:31:13.443700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:13.443734 | orchestrator | Monday 02 June 2025 17:31:13 +0000 (0:00:00.211) 0:00:07.802 *********** 2025-06-02 17:31:13.639283 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:13.639746 | orchestrator | 2025-06-02 17:31:13.640800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:13.641299 | orchestrator | Monday 02 June 2025 17:31:13 +0000 (0:00:00.198) 0:00:08.000 *********** 2025-06-02 17:31:13.834441 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:13.834791 | orchestrator | 2025-06-02 17:31:13.836213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:13.837927 | orchestrator | Monday 02 June 2025 17:31:13 +0000 (0:00:00.194) 0:00:08.195 *********** 2025-06-02 17:31:14.911018 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 17:31:14.912622 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 17:31:14.913736 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 17:31:14.914620 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 17:31:14.915488 | orchestrator | 2025-06-02 17:31:14.916818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:14.917599 
| orchestrator | Monday 02 June 2025 17:31:14 +0000 (0:00:01.076) 0:00:09.271 *********** 2025-06-02 17:31:15.137701 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:15.138641 | orchestrator | 2025-06-02 17:31:15.139452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:15.140574 | orchestrator | Monday 02 June 2025 17:31:15 +0000 (0:00:00.228) 0:00:09.499 *********** 2025-06-02 17:31:15.349226 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:15.350123 | orchestrator | 2025-06-02 17:31:15.351027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:15.353709 | orchestrator | Monday 02 June 2025 17:31:15 +0000 (0:00:00.211) 0:00:09.711 *********** 2025-06-02 17:31:15.550972 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:15.551372 | orchestrator | 2025-06-02 17:31:15.552455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:15.552806 | orchestrator | Monday 02 June 2025 17:31:15 +0000 (0:00:00.201) 0:00:09.913 *********** 2025-06-02 17:31:15.751194 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:15.751875 | orchestrator | 2025-06-02 17:31:15.753004 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 17:31:15.754858 | orchestrator | Monday 02 June 2025 17:31:15 +0000 (0:00:00.199) 0:00:10.112 *********** 2025-06-02 17:31:15.896854 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:15.897453 | orchestrator | 2025-06-02 17:31:15.898269 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 17:31:15.898819 | orchestrator | Monday 02 June 2025 17:31:15 +0000 (0:00:00.145) 0:00:10.258 *********** 2025-06-02 17:31:16.113630 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}}) 2025-06-02 17:31:16.113736 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '42dde184-17ae-50b7-8921-f17969f5efd9'}}) 2025-06-02 17:31:16.114174 | orchestrator | 2025-06-02 17:31:16.114415 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 17:31:16.115020 | orchestrator | Monday 02 June 2025 17:31:16 +0000 (0:00:00.218) 0:00:10.476 *********** 2025-06-02 17:31:18.093210 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}) 2025-06-02 17:31:18.093884 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'}) 2025-06-02 17:31:18.094827 | orchestrator | 2025-06-02 17:31:18.096373 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 17:31:18.097690 | orchestrator | Monday 02 June 2025 17:31:18 +0000 (0:00:01.976) 0:00:12.453 *********** 2025-06-02 17:31:18.256727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  2025-06-02 17:31:18.257916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:18.259108 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:18.260935 | orchestrator | 2025-06-02 17:31:18.261446 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 17:31:18.262134 | orchestrator | Monday 02 June 2025 17:31:18 +0000 (0:00:00.164) 0:00:12.618 *********** 2025-06-02 17:31:19.712474 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}) 2025-06-02 17:31:19.715950 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'}) 2025-06-02 17:31:19.715989 | orchestrator | 2025-06-02 17:31:19.716005 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 17:31:19.716018 | orchestrator | Monday 02 June 2025 17:31:19 +0000 (0:00:01.454) 0:00:14.072 *********** 2025-06-02 17:31:19.870876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  2025-06-02 17:31:19.873829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:19.874839 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:19.874868 | orchestrator | 2025-06-02 17:31:19.875872 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 17:31:19.876606 | orchestrator | Monday 02 June 2025 17:31:19 +0000 (0:00:00.161) 0:00:14.233 *********** 2025-06-02 17:31:20.004422 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:20.005280 | orchestrator | 2025-06-02 17:31:20.006684 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 17:31:20.008245 | orchestrator | Monday 02 June 2025 17:31:19 +0000 (0:00:00.131) 0:00:14.365 *********** 2025-06-02 17:31:20.393107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  2025-06-02 17:31:20.394624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:20.396132 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:20.396341 | orchestrator | 2025-06-02 17:31:20.397443 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 17:31:20.397726 | orchestrator | Monday 02 June 2025 17:31:20 +0000 (0:00:00.387) 0:00:14.753 *********** 2025-06-02 17:31:20.535292 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:20.535397 | orchestrator | 2025-06-02 17:31:20.535411 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 17:31:20.535597 | orchestrator | Monday 02 June 2025 17:31:20 +0000 (0:00:00.144) 0:00:14.897 *********** 2025-06-02 17:31:20.690275 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  2025-06-02 17:31:20.691370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:20.692586 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:20.693348 | orchestrator | 2025-06-02 17:31:20.695089 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 17:31:20.695823 | orchestrator | Monday 02 June 2025 17:31:20 +0000 (0:00:00.154) 0:00:15.052 *********** 2025-06-02 17:31:20.832693 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:20.833777 | orchestrator | 2025-06-02 17:31:20.835288 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 17:31:20.836354 | orchestrator | Monday 02 June 2025 17:31:20 +0000 (0:00:00.143) 0:00:15.195 *********** 2025-06-02 17:31:20.989983 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  2025-06-02 17:31:20.991504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:20.993057 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:20.994285 | orchestrator | 2025-06-02 17:31:20.995449 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 17:31:20.996601 | orchestrator | Monday 02 June 2025 17:31:20 +0000 (0:00:00.156) 0:00:15.352 *********** 2025-06-02 17:31:21.147820 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:31:21.148542 | orchestrator | 2025-06-02 17:31:21.149230 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 17:31:21.150400 | orchestrator | Monday 02 June 2025 17:31:21 +0000 (0:00:00.157) 0:00:15.510 *********** 2025-06-02 17:31:21.315135 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  2025-06-02 17:31:21.315247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:21.315341 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:21.315745 | orchestrator | 2025-06-02 17:31:21.316118 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 17:31:21.317170 | orchestrator | Monday 02 June 2025 17:31:21 +0000 (0:00:00.166) 0:00:15.677 *********** 2025-06-02 17:31:21.476502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  
2025-06-02 17:31:21.477084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:21.478537 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:21.479375 | orchestrator | 2025-06-02 17:31:21.480408 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 17:31:21.480916 | orchestrator | Monday 02 June 2025 17:31:21 +0000 (0:00:00.161) 0:00:15.838 *********** 2025-06-02 17:31:21.636439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})  2025-06-02 17:31:21.638719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})  2025-06-02 17:31:21.638752 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:21.639741 | orchestrator | 2025-06-02 17:31:21.640531 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 17:31:21.641339 | orchestrator | Monday 02 June 2025 17:31:21 +0000 (0:00:00.158) 0:00:15.997 *********** 2025-06-02 17:31:21.804369 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:21.804830 | orchestrator | 2025-06-02 17:31:21.805907 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 17:31:21.806648 | orchestrator | Monday 02 June 2025 17:31:21 +0000 (0:00:00.168) 0:00:16.165 *********** 2025-06-02 17:31:21.959537 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:21.959914 | orchestrator | 2025-06-02 17:31:21.960668 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 17:31:21.961475 | orchestrator | Monday 02 June 2025 17:31:21 +0000 (0:00:00.156) 
0:00:16.322 *********** 2025-06-02 17:31:22.103419 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:22.103629 | orchestrator | 2025-06-02 17:31:22.104318 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 17:31:22.105255 | orchestrator | Monday 02 June 2025 17:31:22 +0000 (0:00:00.143) 0:00:16.465 *********** 2025-06-02 17:31:22.468221 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 17:31:22.469106 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 17:31:22.470132 | orchestrator | } 2025-06-02 17:31:22.471202 | orchestrator | 2025-06-02 17:31:22.472166 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 17:31:22.472761 | orchestrator | Monday 02 June 2025 17:31:22 +0000 (0:00:00.363) 0:00:16.829 *********** 2025-06-02 17:31:22.615087 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 17:31:22.615193 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 17:31:22.615749 | orchestrator | } 2025-06-02 17:31:22.616213 | orchestrator | 2025-06-02 17:31:22.616688 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 17:31:22.617088 | orchestrator | Monday 02 June 2025 17:31:22 +0000 (0:00:00.148) 0:00:16.977 *********** 2025-06-02 17:31:22.767919 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 17:31:22.768023 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 17:31:22.768107 | orchestrator | } 2025-06-02 17:31:22.770376 | orchestrator | 2025-06-02 17:31:22.770777 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 17:31:22.771102 | orchestrator | Monday 02 June 2025 17:31:22 +0000 (0:00:00.152) 0:00:17.130 *********** 2025-06-02 17:31:23.444748 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:31:23.445838 | orchestrator | 2025-06-02 17:31:23.447192 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-02 17:31:23.448342 | orchestrator | Monday 02 June 2025 17:31:23 +0000 (0:00:00.675) 0:00:17.805 *********** 2025-06-02 17:31:23.950693 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:31:23.951623 | orchestrator | 2025-06-02 17:31:23.953367 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 17:31:23.954945 | orchestrator | Monday 02 June 2025 17:31:23 +0000 (0:00:00.505) 0:00:18.311 *********** 2025-06-02 17:31:24.451824 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:31:24.452140 | orchestrator | 2025-06-02 17:31:24.454125 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 17:31:24.455506 | orchestrator | Monday 02 June 2025 17:31:24 +0000 (0:00:00.501) 0:00:18.812 *********** 2025-06-02 17:31:24.608950 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:31:24.612737 | orchestrator | 2025-06-02 17:31:24.612796 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 17:31:24.613223 | orchestrator | Monday 02 June 2025 17:31:24 +0000 (0:00:00.158) 0:00:18.970 *********** 2025-06-02 17:31:24.735055 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:24.739752 | orchestrator | 2025-06-02 17:31:24.740437 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 17:31:24.742781 | orchestrator | Monday 02 June 2025 17:31:24 +0000 (0:00:00.124) 0:00:19.094 *********** 2025-06-02 17:31:24.843462 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:24.844231 | orchestrator | 2025-06-02 17:31:24.844887 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 17:31:24.845951 | orchestrator | Monday 02 June 2025 17:31:24 +0000 (0:00:00.110) 0:00:19.205 *********** 2025-06-02 17:31:24.992766 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-02 17:31:24.992913 | orchestrator |  "vgs_report": { 2025-06-02 17:31:24.994276 | orchestrator |  "vg": [] 2025-06-02 17:31:24.995186 | orchestrator |  } 2025-06-02 17:31:24.995909 | orchestrator | } 2025-06-02 17:31:24.996793 | orchestrator | 2025-06-02 17:31:24.997681 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 17:31:24.998205 | orchestrator | Monday 02 June 2025 17:31:24 +0000 (0:00:00.148) 0:00:19.353 *********** 2025-06-02 17:31:25.142281 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:25.142753 | orchestrator | 2025-06-02 17:31:25.143058 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 17:31:25.143760 | orchestrator | Monday 02 June 2025 17:31:25 +0000 (0:00:00.150) 0:00:19.504 *********** 2025-06-02 17:31:25.285969 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:25.288425 | orchestrator | 2025-06-02 17:31:25.288469 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 17:31:25.289822 | orchestrator | Monday 02 June 2025 17:31:25 +0000 (0:00:00.142) 0:00:19.646 *********** 2025-06-02 17:31:25.620465 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:25.620655 | orchestrator | 2025-06-02 17:31:25.621413 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 17:31:25.621886 | orchestrator | Monday 02 June 2025 17:31:25 +0000 (0:00:00.335) 0:00:19.982 *********** 2025-06-02 17:31:25.757973 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:25.758253 | orchestrator | 2025-06-02 17:31:25.759384 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 17:31:25.760139 | orchestrator | Monday 02 June 2025 17:31:25 +0000 (0:00:00.136) 0:00:20.118 *********** 2025-06-02 17:31:25.901337 | orchestrator | skipping: 
[testbed-node-3]
2025-06-02 17:31:25.901512 | orchestrator |
2025-06-02 17:31:25.903479 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-02 17:31:25.904480 | orchestrator | Monday 02 June 2025 17:31:25 +0000 (0:00:00.143) 0:00:20.262 ***********
2025-06-02 17:31:26.045083 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:26.045190 | orchestrator |
2025-06-02 17:31:26.045372 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-02 17:31:26.046137 | orchestrator | Monday 02 June 2025 17:31:26 +0000 (0:00:00.144) 0:00:20.406 ***********
2025-06-02 17:31:26.176227 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:26.176949 | orchestrator |
2025-06-02 17:31:26.177775 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-02 17:31:26.178771 | orchestrator | Monday 02 June 2025 17:31:26 +0000 (0:00:00.131) 0:00:20.538 ***********
2025-06-02 17:31:26.316383 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:26.317068 | orchestrator |
2025-06-02 17:31:26.318651 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-02 17:31:26.319370 | orchestrator | Monday 02 June 2025 17:31:26 +0000 (0:00:00.138) 0:00:20.676 ***********
2025-06-02 17:31:26.458917 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:26.459770 | orchestrator |
2025-06-02 17:31:26.460989 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-02 17:31:26.461733 | orchestrator | Monday 02 June 2025 17:31:26 +0000 (0:00:00.143) 0:00:20.819 ***********
2025-06-02 17:31:26.597319 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:26.597957 | orchestrator |
2025-06-02 17:31:26.599591 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-02 17:31:26.600708 | orchestrator | Monday 02 June 2025 17:31:26 +0000 (0:00:00.139) 0:00:20.959 ***********
2025-06-02 17:31:26.739692 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:26.744527 | orchestrator |
2025-06-02 17:31:26.744597 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-02 17:31:26.746076 | orchestrator | Monday 02 June 2025 17:31:26 +0000 (0:00:00.142) 0:00:21.102 ***********
2025-06-02 17:31:26.873624 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:26.875770 | orchestrator |
2025-06-02 17:31:26.876336 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-02 17:31:26.877716 | orchestrator | Monday 02 June 2025 17:31:26 +0000 (0:00:00.133) 0:00:21.236 ***********
2025-06-02 17:31:27.024538 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:27.025413 | orchestrator |
2025-06-02 17:31:27.027440 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 17:31:27.027873 | orchestrator | Monday 02 June 2025 17:31:27 +0000 (0:00:00.150) 0:00:21.386 ***********
2025-06-02 17:31:27.173276 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:27.174401 | orchestrator |
2025-06-02 17:31:27.176078 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 17:31:27.177272 | orchestrator | Monday 02 June 2025 17:31:27 +0000 (0:00:00.148) 0:00:21.535 ***********
2025-06-02 17:31:27.331417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:27.332446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:27.333632 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:27.334702 | orchestrator |
2025-06-02 17:31:27.335861 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 17:31:27.336741 | orchestrator | Monday 02 June 2025 17:31:27 +0000 (0:00:00.156) 0:00:21.691 ***********
2025-06-02 17:31:27.711274 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:27.711520 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:27.711617 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:27.712363 | orchestrator |
2025-06-02 17:31:27.713912 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 17:31:27.714801 | orchestrator | Monday 02 June 2025 17:31:27 +0000 (0:00:00.381) 0:00:22.073 ***********
2025-06-02 17:31:27.888034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:27.888244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:27.889150 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:27.890165 | orchestrator |
2025-06-02 17:31:27.890675 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 17:31:27.891891 | orchestrator | Monday 02 June 2025 17:31:27 +0000 (0:00:00.176) 0:00:22.250 ***********
2025-06-02 17:31:28.028194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:28.029082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:28.030362 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:28.030909 | orchestrator |
2025-06-02 17:31:28.031631 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 17:31:28.032564 | orchestrator | Monday 02 June 2025 17:31:28 +0000 (0:00:00.140) 0:00:22.390 ***********
2025-06-02 17:31:28.194185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:28.194749 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:28.196450 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:28.197519 | orchestrator |
2025-06-02 17:31:28.198355 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 17:31:28.199305 | orchestrator | Monday 02 June 2025 17:31:28 +0000 (0:00:00.164) 0:00:22.555 ***********
2025-06-02 17:31:28.351336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:28.353176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:28.354254 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:28.355426 | orchestrator |
2025-06-02 17:31:28.356537 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 17:31:28.358013 | orchestrator | Monday 02 June 2025 17:31:28 +0000 (0:00:00.157) 0:00:22.712 ***********
2025-06-02 17:31:28.511854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:28.512837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:28.512946 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:28.513777 | orchestrator |
2025-06-02 17:31:28.514951 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 17:31:28.515926 | orchestrator | Monday 02 June 2025 17:31:28 +0000 (0:00:00.161) 0:00:22.873 ***********
2025-06-02 17:31:28.672181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:28.672866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:28.675063 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:28.676580 | orchestrator |
2025-06-02 17:31:28.677955 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 17:31:28.679919 | orchestrator | Monday 02 June 2025 17:31:28 +0000 (0:00:00.160) 0:00:23.033 ***********
2025-06-02 17:31:29.172663 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:31:29.172892 | orchestrator |
2025-06-02 17:31:29.173409 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 17:31:29.174253 | orchestrator | Monday 02 June 2025 17:31:29 +0000 (0:00:00.501) 0:00:23.535 ***********
2025-06-02 17:31:29.670471 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:31:29.670792 | orchestrator |
2025-06-02 17:31:29.671275 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 17:31:29.673141 | orchestrator | Monday 02 June 2025 17:31:29 +0000 (0:00:00.496) 0:00:24.031 ***********
2025-06-02 17:31:29.818116 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:31:29.818279 | orchestrator |
2025-06-02 17:31:29.818375 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 17:31:29.818887 | orchestrator | Monday 02 June 2025 17:31:29 +0000 (0:00:00.149) 0:00:24.180 ***********
2025-06-02 17:31:30.002300 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'vg_name': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:30.002366 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'vg_name': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:30.003127 | orchestrator |
2025-06-02 17:31:30.003725 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 17:31:30.004314 | orchestrator | Monday 02 June 2025 17:31:29 +0000 (0:00:00.183) 0:00:24.364 ***********
2025-06-02 17:31:30.159486 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:30.160909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:30.164346 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:30.165045 | orchestrator |
2025-06-02 17:31:30.165937 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 17:31:30.166719 | orchestrator | Monday 02 June 2025 17:31:30 +0000 (0:00:00.157) 0:00:24.522 ***********
2025-06-02 17:31:30.536999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:30.537166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:30.538429 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:30.539505 | orchestrator |
2025-06-02 17:31:30.540729 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 17:31:30.541692 | orchestrator | Monday 02 June 2025 17:31:30 +0000 (0:00:00.375) 0:00:24.897 ***********
2025-06-02 17:31:30.702103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'})
2025-06-02 17:31:30.702195 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'})
2025-06-02 17:31:30.702268 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:30.703789 | orchestrator |
2025-06-02 17:31:30.704234 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 17:31:30.704681 | orchestrator | Monday 02 June 2025 17:31:30 +0000 (0:00:00.166) 0:00:25.064 ***********
2025-06-02 17:31:31.015887 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:31:31.015992 | orchestrator |     "lvm_report": {
2025-06-02 17:31:31.016007 | orchestrator |         "lv": [
2025-06-02 17:31:31.016019 | orchestrator |             {
2025-06-02 17:31:31.016425 | orchestrator |                 "lv_name": "osd-block-42dde184-17ae-50b7-8921-f17969f5efd9",
2025-06-02 17:31:31.017208 | orchestrator |                 "vg_name": "ceph-42dde184-17ae-50b7-8921-f17969f5efd9"
2025-06-02 17:31:31.017658 | orchestrator |             },
2025-06-02 17:31:31.018120 | orchestrator |             {
2025-06-02 17:31:31.018380 | orchestrator |                 "lv_name": "osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704",
2025-06-02 17:31:31.019335 | orchestrator |                 "vg_name": "ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704"
2025-06-02 17:31:31.019710 | orchestrator |             }
2025-06-02 17:31:31.020247 | orchestrator |         ],
2025-06-02 17:31:31.020529 | orchestrator |         "pv": [
2025-06-02 17:31:31.020972 | orchestrator |             {
2025-06-02 17:31:31.022004 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 17:31:31.022138 | orchestrator |                 "vg_name": "ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704"
2025-06-02 17:31:31.022436 | orchestrator |             },
2025-06-02 17:31:31.023633 | orchestrator |             {
2025-06-02 17:31:31.023725 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 17:31:31.026278 | orchestrator |                 "vg_name": "ceph-42dde184-17ae-50b7-8921-f17969f5efd9"
2025-06-02 17:31:31.026306 | orchestrator |             }
2025-06-02 17:31:31.026318 | orchestrator |         ]
2025-06-02 17:31:31.026329 | orchestrator |     }
2025-06-02 17:31:31.026341 | orchestrator | }
2025-06-02 17:31:31.026353 | orchestrator |
2025-06-02 17:31:31.026365 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 17:31:31.026536 | orchestrator |
2025-06-02 17:31:31.026698 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 17:31:31.027044 | orchestrator | Monday 02 June 2025 17:31:31 +0000 (0:00:00.314) 0:00:25.378 ***********
2025-06-02 17:31:31.264101 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 17:31:31.264374 | orchestrator |
2025-06-02 17:31:31.264967 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 17:31:31.265393 | orchestrator | Monday 02 June 2025 17:31:31 +0000 (0:00:00.247) 0:00:25.626 ***********
2025-06-02 17:31:31.492061 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:31:31.492228 | orchestrator |
2025-06-02 17:31:31.492594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:31.492870 | orchestrator | Monday 02 June 2025 17:31:31 +0000 (0:00:00.227) 0:00:25.853 ***********
2025-06-02 17:31:31.924428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-02 17:31:31.924534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-02 17:31:31.924850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-02 17:31:31.925583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-02 17:31:31.926175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-02 17:31:31.926741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-02 17:31:31.928490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-02 17:31:31.928638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-02 17:31:31.928659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-02 17:31:31.928671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-02 17:31:31.928761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-02 17:31:31.929281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-02 17:31:31.929918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-02 17:31:31.930535 | orchestrator |
2025-06-02 17:31:31.930819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:31.931064 | orchestrator | Monday 02 June 2025 17:31:31 +0000 (0:00:00.433) 0:00:26.287 ***********
2025-06-02 17:31:32.140014 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:32.140315 | orchestrator |
2025-06-02 17:31:32.140929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:32.141270 | orchestrator | Monday 02 June 2025 17:31:32 +0000 (0:00:00.214) 0:00:26.502 ***********
2025-06-02 17:31:32.347158 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:32.347247 | orchestrator |
2025-06-02 17:31:32.347292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:32.347367 | orchestrator | Monday 02 June 2025 17:31:32 +0000 (0:00:00.208) 0:00:26.710 ***********
2025-06-02 17:31:32.539902 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:32.542318 | orchestrator |
2025-06-02 17:31:32.542358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:32.542485 | orchestrator | Monday 02 June 2025 17:31:32 +0000 (0:00:00.189) 0:00:26.900 ***********
2025-06-02 17:31:33.190353 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:33.190512 | orchestrator |
2025-06-02 17:31:33.191099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:33.191983 | orchestrator | Monday 02 June 2025 17:31:33 +0000 (0:00:00.651) 0:00:27.552 ***********
2025-06-02 17:31:33.391306 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:33.391744 | orchestrator |
2025-06-02 17:31:33.391780 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:33.393123 | orchestrator | Monday 02 June 2025 17:31:33 +0000 (0:00:00.201) 0:00:27.753 ***********
2025-06-02 17:31:33.602271 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:33.603212 | orchestrator |
2025-06-02 17:31:33.603251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:33.604015 | orchestrator | Monday 02 June 2025 17:31:33 +0000 (0:00:00.207) 0:00:27.961 ***********
2025-06-02 17:31:33.800349 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:33.800746 | orchestrator |
2025-06-02 17:31:33.801245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:33.802099 | orchestrator | Monday 02 June 2025 17:31:33 +0000 (0:00:00.202) 0:00:28.163 ***********
2025-06-02 17:31:34.034240 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:34.034843 | orchestrator |
2025-06-02 17:31:34.035365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:34.039233 | orchestrator | Monday 02 June 2025 17:31:34 +0000 (0:00:00.231) 0:00:28.395 ***********
2025-06-02 17:31:34.455492 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300)
2025-06-02 17:31:34.456092 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300)
2025-06-02 17:31:34.456559 | orchestrator |
2025-06-02 17:31:34.457166 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:34.458681 | orchestrator | Monday 02 June 2025 17:31:34 +0000 (0:00:00.422) 0:00:28.817 ***********
2025-06-02 17:31:34.890131 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33)
2025-06-02 17:31:34.890378 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33)
2025-06-02 17:31:34.891323 | orchestrator |
2025-06-02 17:31:34.892220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:34.892795 | orchestrator | Monday 02 June 2025 17:31:34 +0000 (0:00:00.435) 0:00:29.252 ***********
2025-06-02 17:31:35.301975 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38)
2025-06-02 17:31:35.302889 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38)
2025-06-02 17:31:35.304027 | orchestrator |
2025-06-02 17:31:35.304859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:35.305520 | orchestrator | Monday 02 June 2025 17:31:35 +0000 (0:00:00.411) 0:00:29.664 ***********
2025-06-02 17:31:35.761082 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148)
2025-06-02 17:31:35.763832 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148)
2025-06-02 17:31:35.765529 | orchestrator |
2025-06-02 17:31:35.766198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:35.767021 | orchestrator | Monday 02 June 2025 17:31:35 +0000 (0:00:00.456) 0:00:30.120 ***********
2025-06-02 17:31:36.089043 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 17:31:36.090350 | orchestrator |
2025-06-02 17:31:36.091594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:36.092400 | orchestrator | Monday 02 June 2025 17:31:36 +0000 (0:00:00.330) 0:00:30.451 ***********
2025-06-02 17:31:36.735572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-02 17:31:36.736784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-02 17:31:36.738340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-02 17:31:36.739431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-02 17:31:36.740618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-02 17:31:36.741966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-02 17:31:36.743279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-02 17:31:36.744209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-02 17:31:36.744889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-02 17:31:36.745654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-02 17:31:36.746581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-02 17:31:36.747181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-02 17:31:36.747744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-02 17:31:36.748429 | orchestrator |
2025-06-02 17:31:36.748848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:36.749413 | orchestrator | Monday 02 June 2025 17:31:36 +0000 (0:00:00.644) 0:00:31.096 ***********
2025-06-02 17:31:36.941144 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:36.941607 | orchestrator |
2025-06-02 17:31:36.942201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:36.943187 | orchestrator | Monday 02 June 2025 17:31:36 +0000 (0:00:00.207) 0:00:31.303 ***********
2025-06-02 17:31:37.150347 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:37.150449 | orchestrator |
2025-06-02 17:31:37.150465 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:37.150477 | orchestrator | Monday 02 June 2025 17:31:37 +0000 (0:00:00.207) 0:00:31.511 ***********
2025-06-02 17:31:37.355142 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:37.355335 | orchestrator |
2025-06-02 17:31:37.355619 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:37.356113 | orchestrator | Monday 02 June 2025 17:31:37 +0000 (0:00:00.206) 0:00:31.717 ***********
2025-06-02 17:31:37.561657 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:37.561862 | orchestrator |
2025-06-02 17:31:37.561885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:37.561898 | orchestrator | Monday 02 June 2025 17:31:37 +0000 (0:00:00.204) 0:00:31.922 ***********
2025-06-02 17:31:37.769505 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:37.770175 | orchestrator |
2025-06-02 17:31:37.771887 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:37.772856 | orchestrator | Monday 02 June 2025 17:31:37 +0000 (0:00:00.209) 0:00:32.131 ***********
2025-06-02 17:31:37.980885 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:37.982453 | orchestrator |
2025-06-02 17:31:37.982580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:37.983182 | orchestrator | Monday 02 June 2025 17:31:37 +0000 (0:00:00.211) 0:00:32.343 ***********
2025-06-02 17:31:38.189908 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:38.190012 | orchestrator |
2025-06-02 17:31:38.190971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:38.191643 | orchestrator | Monday 02 June 2025 17:31:38 +0000 (0:00:00.209) 0:00:32.552 ***********
2025-06-02 17:31:38.409625 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:38.409725 | orchestrator |
2025-06-02 17:31:38.410170 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:38.410648 | orchestrator | Monday 02 June 2025 17:31:38 +0000 (0:00:00.219) 0:00:32.772 ***********
2025-06-02 17:31:39.282716 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-02 17:31:39.284383 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-02 17:31:39.285289 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-02 17:31:39.285962 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-02 17:31:39.286761 | orchestrator |
2025-06-02 17:31:39.287605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:39.288213 | orchestrator | Monday 02 June 2025 17:31:39 +0000 (0:00:00.871) 0:00:33.643 ***********
2025-06-02 17:31:39.485835 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:39.486132 | orchestrator |
2025-06-02 17:31:39.486518 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:39.488132 | orchestrator | Monday 02 June 2025 17:31:39 +0000 (0:00:00.204) 0:00:33.848 ***********
2025-06-02 17:31:39.707450 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:39.708171 | orchestrator |
2025-06-02 17:31:39.709898 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:39.710154 | orchestrator | Monday 02 June 2025 17:31:39 +0000 (0:00:00.219) 0:00:34.067 ***********
2025-06-02 17:31:40.318252 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:40.318852 | orchestrator |
2025-06-02 17:31:40.320698 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:40.321833 | orchestrator | Monday 02 June 2025 17:31:40 +0000 (0:00:00.613) 0:00:34.680 ***********
2025-06-02 17:31:40.530873 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:40.530956 | orchestrator |
2025-06-02 17:31:40.531011 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-02 17:31:40.531022 | orchestrator | Monday 02 June 2025 17:31:40 +0000 (0:00:00.213) 0:00:34.894 ***********
2025-06-02 17:31:40.671427 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:40.672686 | orchestrator |
2025-06-02 17:31:40.673697 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-02 17:31:40.674503 | orchestrator | Monday 02 June 2025 17:31:40 +0000 (0:00:00.139) 0:00:35.034 ***********
2025-06-02 17:31:40.858115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de836c00-0412-5e15-aa8a-abef9bebfb26'}})
2025-06-02 17:31:40.859070 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'}})
2025-06-02 17:31:40.859671 | orchestrator |
2025-06-02 17:31:40.860309 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-02 17:31:40.860728 | orchestrator | Monday 02 June 2025 17:31:40 +0000 (0:00:00.184) 0:00:35.218 ***********
2025-06-02 17:31:42.792917 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:42.793026 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:42.793041 | orchestrator |
2025-06-02 17:31:42.793455 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-02 17:31:42.795064 | orchestrator | Monday 02 June 2025 17:31:42 +0000 (0:00:01.932) 0:00:37.151 ***********
2025-06-02 17:31:42.918975 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:42.920252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:42.921017 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:42.921798 | orchestrator |
2025-06-02 17:31:42.922317 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-02 17:31:42.923482 | orchestrator | Monday 02 June 2025 17:31:42 +0000 (0:00:00.130) 0:00:37.282 ***********
2025-06-02 17:31:44.203643 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:44.204752 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:44.205770 | orchestrator |
2025-06-02 17:31:44.206725 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-02 17:31:44.207414 | orchestrator | Monday 02 June 2025 17:31:44 +0000 (0:00:01.282) 0:00:38.565 ***********
2025-06-02 17:31:44.331908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:44.332156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:44.332884 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:44.333345 | orchestrator |
2025-06-02 17:31:44.334226 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-02 17:31:44.334453 | orchestrator | Monday 02 June 2025 17:31:44 +0000 (0:00:00.129) 0:00:38.695 ***********
2025-06-02 17:31:44.457908 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:44.458690 | orchestrator |
2025-06-02 17:31:44.462099 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-02 17:31:44.462131 | orchestrator | Monday 02 June 2025 17:31:44 +0000 (0:00:00.126) 0:00:38.821 ***********
2025-06-02 17:31:44.596106 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:44.596276 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:44.596926 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:44.597926 | orchestrator |
2025-06-02 17:31:44.598684 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-02 17:31:44.599296 | orchestrator | Monday 02 June 2025 17:31:44 +0000 (0:00:00.133) 0:00:38.954 ***********
2025-06-02 17:31:44.773820 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:44.774069 | orchestrator |
2025-06-02 17:31:44.777289 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-02 17:31:44.777326 | orchestrator | Monday 02 June 2025 17:31:44 +0000 (0:00:00.179) 0:00:39.134 ***********
2025-06-02 17:31:44.920940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:44.921181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:44.921701 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:44.922811 | orchestrator |
2025-06-02 17:31:44.923078 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-02 17:31:44.923395 | orchestrator | Monday 02 June 2025 17:31:44 +0000 (0:00:00.148) 0:00:39.282 ***********
2025-06-02 17:31:45.216288 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:45.217140 | orchestrator |
2025-06-02 17:31:45.218641 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-02 17:31:45.218674 | orchestrator | Monday 02 June 2025 17:31:45 +0000 (0:00:00.296) 0:00:39.579 ***********
2025-06-02 17:31:45.357621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:45.357925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:45.359696 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:45.359746 | orchestrator |
2025-06-02 17:31:45.360764 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-02 17:31:45.361803 | orchestrator | Monday 02 June 2025 17:31:45 +0000 (0:00:00.140) 0:00:39.720 ***********
2025-06-02 17:31:45.485349 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:31:45.485712 | orchestrator |
2025-06-02 17:31:45.486610 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-02 17:31:45.487584 | orchestrator | Monday 02 June 2025 17:31:45 +0000 (0:00:00.127) 0:00:39.847 ***********
2025-06-02 17:31:45.618413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:45.618576 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:45.618658 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:45.619209 | orchestrator |
2025-06-02 17:31:45.619556 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-02 17:31:45.620611 | orchestrator | Monday 02 June 2025 17:31:45 +0000 (0:00:00.133) 0:00:39.981 ***********
2025-06-02 17:31:45.762507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:45.763147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:45.764232 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:45.765342 | orchestrator |
2025-06-02 17:31:45.766081 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-02 17:31:45.766986 | orchestrator | Monday 02 June 2025 17:31:45 +0000 (0:00:00.144) 0:00:40.125 ***********
2025-06-02 17:31:45.905761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})
2025-06-02 17:31:45.906337 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})
2025-06-02 17:31:45.907001 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:45.908736 | orchestrator |
2025-06-02 17:31:45.908778 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-02 17:31:45.909372 | orchestrator | Monday 02 June 2025 17:31:45 +0000 (0:00:00.142) 0:00:40.267 ***********
2025-06-02 17:31:46.071050 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:46.071155 | orchestrator |
2025-06-02 17:31:46.071170 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-02 17:31:46.071666 | orchestrator | Monday 02 June 2025 17:31:46 +0000 (0:00:00.163) 0:00:40.431 ***********
2025-06-02 17:31:46.193129 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:46.193528 | orchestrator |
2025-06-02 17:31:46.194436 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-02 17:31:46.196136 | orchestrator | Monday 02 June 2025 17:31:46 +0000 (0:00:00.125) 0:00:40.557 ***********
2025-06-02 17:31:46.323431 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:31:46.324287 | orchestrator |
2025-06-02 17:31:46.324312 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-02 17:31:46.324512 | orchestrator | Monday 02 June 2025 17:31:46 +0000 (0:00:00.128) 0:00:40.685 ***********
2025-06-02 17:31:46.479506 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 17:31:46.480308 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-02 17:31:46.481987 | orchestrator | }
2025-06-02 17:31:46.482937 | orchestrator |
2025-06-02 17:31:46.483830 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-02 17:31:46.484350 | orchestrator | Monday 02 June 2025 17:31:46 +0000 (0:00:00.155) 0:00:40.841 ***********
2025-06-02 17:31:46.616860 |
orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:31:46.617337 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 17:31:46.617831 | orchestrator | } 2025-06-02 17:31:46.619752 | orchestrator | 2025-06-02 17:31:46.620143 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 17:31:46.621706 | orchestrator | Monday 02 June 2025 17:31:46 +0000 (0:00:00.138) 0:00:40.979 *********** 2025-06-02 17:31:46.750907 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:31:46.751206 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 17:31:46.751239 | orchestrator | } 2025-06-02 17:31:46.752656 | orchestrator | 2025-06-02 17:31:46.752829 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 17:31:46.753116 | orchestrator | Monday 02 June 2025 17:31:46 +0000 (0:00:00.134) 0:00:41.113 *********** 2025-06-02 17:31:47.395160 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:31:47.395761 | orchestrator | 2025-06-02 17:31:47.396368 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 17:31:47.397294 | orchestrator | Monday 02 June 2025 17:31:47 +0000 (0:00:00.642) 0:00:41.756 *********** 2025-06-02 17:31:47.911924 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:31:47.912886 | orchestrator | 2025-06-02 17:31:47.913719 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 17:31:47.914480 | orchestrator | Monday 02 June 2025 17:31:47 +0000 (0:00:00.516) 0:00:42.273 *********** 2025-06-02 17:31:48.418117 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:31:48.418193 | orchestrator | 2025-06-02 17:31:48.419720 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 17:31:48.420166 | orchestrator | Monday 02 June 2025 17:31:48 +0000 (0:00:00.506) 0:00:42.779 *********** 2025-06-02 
17:31:48.555169 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:31:48.555955 | orchestrator | 2025-06-02 17:31:48.556859 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 17:31:48.557670 | orchestrator | Monday 02 June 2025 17:31:48 +0000 (0:00:00.139) 0:00:42.918 *********** 2025-06-02 17:31:48.653042 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:48.653914 | orchestrator | 2025-06-02 17:31:48.654960 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 17:31:48.655746 | orchestrator | Monday 02 June 2025 17:31:48 +0000 (0:00:00.097) 0:00:43.016 *********** 2025-06-02 17:31:48.758909 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:48.760080 | orchestrator | 2025-06-02 17:31:48.761156 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 17:31:48.762172 | orchestrator | Monday 02 June 2025 17:31:48 +0000 (0:00:00.105) 0:00:43.121 *********** 2025-06-02 17:31:48.889076 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:31:48.890105 | orchestrator |  "vgs_report": { 2025-06-02 17:31:48.892270 | orchestrator |  "vg": [] 2025-06-02 17:31:48.892732 | orchestrator |  } 2025-06-02 17:31:48.893460 | orchestrator | } 2025-06-02 17:31:48.894121 | orchestrator | 2025-06-02 17:31:48.895135 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 17:31:48.895633 | orchestrator | Monday 02 June 2025 17:31:48 +0000 (0:00:00.130) 0:00:43.252 *********** 2025-06-02 17:31:49.023397 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:49.024200 | orchestrator | 2025-06-02 17:31:49.024335 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 17:31:49.024884 | orchestrator | Monday 02 June 2025 17:31:49 +0000 (0:00:00.134) 0:00:43.386 *********** 2025-06-02 
17:31:49.158521 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:49.160123 | orchestrator | 2025-06-02 17:31:49.160486 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 17:31:49.161718 | orchestrator | Monday 02 June 2025 17:31:49 +0000 (0:00:00.135) 0:00:43.521 *********** 2025-06-02 17:31:49.290126 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:49.290315 | orchestrator | 2025-06-02 17:31:49.291048 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 17:31:49.292124 | orchestrator | Monday 02 June 2025 17:31:49 +0000 (0:00:00.131) 0:00:43.652 *********** 2025-06-02 17:31:49.416947 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:49.418189 | orchestrator | 2025-06-02 17:31:49.418773 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 17:31:49.419808 | orchestrator | Monday 02 June 2025 17:31:49 +0000 (0:00:00.126) 0:00:43.779 *********** 2025-06-02 17:31:49.549445 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:49.550679 | orchestrator | 2025-06-02 17:31:49.552061 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 17:31:49.553239 | orchestrator | Monday 02 June 2025 17:31:49 +0000 (0:00:00.132) 0:00:43.911 *********** 2025-06-02 17:31:49.928402 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:49.929194 | orchestrator | 2025-06-02 17:31:49.930958 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 17:31:49.931131 | orchestrator | Monday 02 June 2025 17:31:49 +0000 (0:00:00.377) 0:00:44.289 *********** 2025-06-02 17:31:50.059635 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:50.059744 | orchestrator | 2025-06-02 17:31:50.061007 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-06-02 17:31:50.062323 | orchestrator | Monday 02 June 2025 17:31:50 +0000 (0:00:00.132) 0:00:44.422 *********** 2025-06-02 17:31:50.197860 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:50.199211 | orchestrator | 2025-06-02 17:31:50.199596 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 17:31:50.200465 | orchestrator | Monday 02 June 2025 17:31:50 +0000 (0:00:00.138) 0:00:44.560 *********** 2025-06-02 17:31:50.354714 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:50.356024 | orchestrator | 2025-06-02 17:31:50.357054 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 17:31:50.358243 | orchestrator | Monday 02 June 2025 17:31:50 +0000 (0:00:00.156) 0:00:44.716 *********** 2025-06-02 17:31:50.504615 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:50.505068 | orchestrator | 2025-06-02 17:31:50.506129 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 17:31:50.506626 | orchestrator | Monday 02 June 2025 17:31:50 +0000 (0:00:00.150) 0:00:44.867 *********** 2025-06-02 17:31:50.649829 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:50.650971 | orchestrator | 2025-06-02 17:31:50.652254 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 17:31:50.653270 | orchestrator | Monday 02 June 2025 17:31:50 +0000 (0:00:00.145) 0:00:45.012 *********** 2025-06-02 17:31:50.795658 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:50.796409 | orchestrator | 2025-06-02 17:31:50.797303 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 17:31:50.798247 | orchestrator | Monday 02 June 2025 17:31:50 +0000 (0:00:00.145) 0:00:45.158 *********** 2025-06-02 17:31:50.933074 | orchestrator | skipping: [testbed-node-4] 
2025-06-02 17:31:50.934297 | orchestrator | 2025-06-02 17:31:50.936021 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 17:31:50.937075 | orchestrator | Monday 02 June 2025 17:31:50 +0000 (0:00:00.137) 0:00:45.295 *********** 2025-06-02 17:31:51.086581 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:51.087355 | orchestrator | 2025-06-02 17:31:51.088018 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 17:31:51.090061 | orchestrator | Monday 02 June 2025 17:31:51 +0000 (0:00:00.153) 0:00:45.448 *********** 2025-06-02 17:31:51.257294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:51.258136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:51.258750 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:51.260202 | orchestrator | 2025-06-02 17:31:51.260458 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 17:31:51.261671 | orchestrator | Monday 02 June 2025 17:31:51 +0000 (0:00:00.170) 0:00:45.619 *********** 2025-06-02 17:31:51.433519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:51.435927 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:51.436025 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:51.437676 | orchestrator | 2025-06-02 17:31:51.438739 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-06-02 17:31:51.439366 | orchestrator | Monday 02 June 2025 17:31:51 +0000 (0:00:00.176) 0:00:45.795 *********** 2025-06-02 17:31:51.600373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:51.602929 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:51.603507 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:51.604485 | orchestrator | 2025-06-02 17:31:51.605510 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 17:31:51.606078 | orchestrator | Monday 02 June 2025 17:31:51 +0000 (0:00:00.162) 0:00:45.958 *********** 2025-06-02 17:31:51.982880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:51.983910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:51.984307 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:51.985348 | orchestrator | 2025-06-02 17:31:51.985977 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 17:31:51.987223 | orchestrator | Monday 02 June 2025 17:31:51 +0000 (0:00:00.385) 0:00:46.344 *********** 2025-06-02 17:31:52.156794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:52.157778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 
'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:52.159734 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:52.160228 | orchestrator | 2025-06-02 17:31:52.161288 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 17:31:52.162006 | orchestrator | Monday 02 June 2025 17:31:52 +0000 (0:00:00.174) 0:00:46.519 *********** 2025-06-02 17:31:52.317424 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:52.318386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:52.319148 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:52.320045 | orchestrator | 2025-06-02 17:31:52.320967 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 17:31:52.321873 | orchestrator | Monday 02 June 2025 17:31:52 +0000 (0:00:00.160) 0:00:46.679 *********** 2025-06-02 17:31:52.482689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:52.483805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:52.484522 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:52.485247 | orchestrator | 2025-06-02 17:31:52.485674 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 17:31:52.486162 | orchestrator | Monday 02 June 2025 17:31:52 +0000 (0:00:00.162) 0:00:46.842 *********** 2025-06-02 17:31:52.636820 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:52.637429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:52.638745 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:52.639440 | orchestrator | 2025-06-02 17:31:52.640174 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 17:31:52.640856 | orchestrator | Monday 02 June 2025 17:31:52 +0000 (0:00:00.156) 0:00:46.998 *********** 2025-06-02 17:31:53.151421 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:31:53.154055 | orchestrator | 2025-06-02 17:31:53.154297 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-02 17:31:53.154316 | orchestrator | Monday 02 June 2025 17:31:53 +0000 (0:00:00.514) 0:00:47.513 *********** 2025-06-02 17:31:53.682446 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:31:53.683064 | orchestrator | 2025-06-02 17:31:53.684270 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 17:31:53.684969 | orchestrator | Monday 02 June 2025 17:31:53 +0000 (0:00:00.531) 0:00:48.044 *********** 2025-06-02 17:31:53.825311 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:31:53.825430 | orchestrator | 2025-06-02 17:31:53.826648 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 17:31:53.827485 | orchestrator | Monday 02 June 2025 17:31:53 +0000 (0:00:00.143) 0:00:48.188 *********** 2025-06-02 17:31:53.995138 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'vg_name': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'}) 2025-06-02 17:31:53.996137 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'vg_name': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'}) 2025-06-02 17:31:53.997238 | orchestrator | 2025-06-02 17:31:53.998947 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 17:31:53.998976 | orchestrator | Monday 02 June 2025 17:31:53 +0000 (0:00:00.169) 0:00:48.357 *********** 2025-06-02 17:31:54.172233 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:54.172814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:54.173433 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:54.175992 | orchestrator | 2025-06-02 17:31:54.176810 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 17:31:54.177015 | orchestrator | Monday 02 June 2025 17:31:54 +0000 (0:00:00.176) 0:00:48.534 *********** 2025-06-02 17:31:54.329240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:54.330372 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:54.331154 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:54.332502 | orchestrator | 2025-06-02 17:31:54.333709 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-02 17:31:54.334652 | orchestrator | Monday 02 June 2025 17:31:54 +0000 (0:00:00.157) 0:00:48.692 *********** 2025-06-02 17:31:54.499891 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'})  2025-06-02 17:31:54.500007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'})  2025-06-02 17:31:54.500296 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:31:54.501136 | orchestrator | 2025-06-02 17:31:54.501770 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-02 17:31:54.502214 | orchestrator | Monday 02 June 2025 17:31:54 +0000 (0:00:00.170) 0:00:48.862 *********** 2025-06-02 17:31:55.007846 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:31:55.008015 | orchestrator |  "lvm_report": { 2025-06-02 17:31:55.009879 | orchestrator |  "lv": [ 2025-06-02 17:31:55.011012 | orchestrator |  { 2025-06-02 17:31:55.012358 | orchestrator |  "lv_name": "osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9", 2025-06-02 17:31:55.013487 | orchestrator |  "vg_name": "ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9" 2025-06-02 17:31:55.014256 | orchestrator |  }, 2025-06-02 17:31:55.015190 | orchestrator |  { 2025-06-02 17:31:55.015383 | orchestrator |  "lv_name": "osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26", 2025-06-02 17:31:55.015925 | orchestrator |  "vg_name": "ceph-de836c00-0412-5e15-aa8a-abef9bebfb26" 2025-06-02 17:31:55.016150 | orchestrator |  } 2025-06-02 17:31:55.016714 | orchestrator |  ], 2025-06-02 17:31:55.017440 | orchestrator |  "pv": [ 2025-06-02 17:31:55.017994 | orchestrator |  { 2025-06-02 17:31:55.018678 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-02 17:31:55.019002 | orchestrator |  "vg_name": "ceph-de836c00-0412-5e15-aa8a-abef9bebfb26" 2025-06-02 17:31:55.019408 | orchestrator |  }, 2025-06-02 17:31:55.019872 | orchestrator |  { 2025-06-02 17:31:55.020577 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-02 17:31:55.021251 | orchestrator |  "vg_name": 
"ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9" 2025-06-02 17:31:55.022628 | orchestrator |  } 2025-06-02 17:31:55.023099 | orchestrator |  ] 2025-06-02 17:31:55.024040 | orchestrator |  } 2025-06-02 17:31:55.024746 | orchestrator | } 2025-06-02 17:31:55.025265 | orchestrator | 2025-06-02 17:31:55.025711 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 17:31:55.026426 | orchestrator | 2025-06-02 17:31:55.026942 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 17:31:55.027461 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.507) 0:00:49.370 *********** 2025-06-02 17:31:55.267024 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 17:31:55.267701 | orchestrator | 2025-06-02 17:31:55.269125 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 17:31:55.269548 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.259) 0:00:49.629 *********** 2025-06-02 17:31:55.505384 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:31:55.506279 | orchestrator | 2025-06-02 17:31:55.507466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:55.508468 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.237) 0:00:49.867 *********** 2025-06-02 17:31:56.021298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-02 17:31:56.022722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-02 17:31:56.023932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-02 17:31:56.026325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-02 17:31:56.027762 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-02 17:31:56.029018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-02 17:31:56.030850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-02 17:31:56.031371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-02 17:31:56.032136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-02 17:31:56.033006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-02 17:31:56.033487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-02 17:31:56.034103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-02 17:31:56.034591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-02 17:31:56.035138 | orchestrator | 2025-06-02 17:31:56.036599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:56.036628 | orchestrator | Monday 02 June 2025 17:31:56 +0000 (0:00:00.515) 0:00:50.383 *********** 2025-06-02 17:31:56.231035 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:56.231560 | orchestrator | 2025-06-02 17:31:56.232436 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:56.234644 | orchestrator | Monday 02 June 2025 17:31:56 +0000 (0:00:00.210) 0:00:50.593 *********** 2025-06-02 17:31:56.447739 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:56.448299 | orchestrator | 2025-06-02 17:31:56.449274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:56.449994 | orchestrator | 
Monday 02 June 2025 17:31:56 +0000 (0:00:00.216) 0:00:50.809 *********** 2025-06-02 17:31:56.648146 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:56.650198 | orchestrator | 2025-06-02 17:31:56.652953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:56.653000 | orchestrator | Monday 02 June 2025 17:31:56 +0000 (0:00:00.200) 0:00:51.009 *********** 2025-06-02 17:31:56.862321 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:56.863122 | orchestrator | 2025-06-02 17:31:56.863899 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:56.864858 | orchestrator | Monday 02 June 2025 17:31:56 +0000 (0:00:00.214) 0:00:51.224 *********** 2025-06-02 17:31:57.060648 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:57.062624 | orchestrator | 2025-06-02 17:31:57.062665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:57.063889 | orchestrator | Monday 02 June 2025 17:31:57 +0000 (0:00:00.197) 0:00:51.422 *********** 2025-06-02 17:31:57.699353 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:57.699459 | orchestrator | 2025-06-02 17:31:57.700655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:57.701354 | orchestrator | Monday 02 June 2025 17:31:57 +0000 (0:00:00.638) 0:00:52.061 *********** 2025-06-02 17:31:57.913291 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:57.913932 | orchestrator | 2025-06-02 17:31:57.914595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:57.915575 | orchestrator | Monday 02 June 2025 17:31:57 +0000 (0:00:00.214) 0:00:52.276 *********** 2025-06-02 17:31:58.108579 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:31:58.109119 | orchestrator | 2025-06-02 17:31:58.110131 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:58.110961 | orchestrator | Monday 02 June 2025 17:31:58 +0000 (0:00:00.193) 0:00:52.469 *********** 2025-06-02 17:31:58.567333 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8) 2025-06-02 17:31:58.567481 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8) 2025-06-02 17:31:58.568495 | orchestrator | 2025-06-02 17:31:58.569705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:58.570676 | orchestrator | Monday 02 June 2025 17:31:58 +0000 (0:00:00.457) 0:00:52.926 *********** 2025-06-02 17:31:58.976650 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f) 2025-06-02 17:31:58.977900 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f) 2025-06-02 17:31:58.979113 | orchestrator | 2025-06-02 17:31:58.980459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:58.981072 | orchestrator | Monday 02 June 2025 17:31:58 +0000 (0:00:00.411) 0:00:53.338 *********** 2025-06-02 17:31:59.421063 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb) 2025-06-02 17:31:59.421266 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb) 2025-06-02 17:31:59.422423 | orchestrator | 2025-06-02 17:31:59.423162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:59.423619 | orchestrator | Monday 02 June 2025 17:31:59 +0000 (0:00:00.443) 0:00:53.781 *********** 2025-06-02 17:31:59.866258 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a) 2025-06-02 17:31:59.866447 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a) 2025-06-02 17:31:59.867349 | orchestrator | 2025-06-02 17:31:59.867733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:59.868347 | orchestrator | Monday 02 June 2025 17:31:59 +0000 (0:00:00.447) 0:00:54.229 *********** 2025-06-02 17:32:00.224157 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 17:32:00.224916 | orchestrator | 2025-06-02 17:32:00.226230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:00.227176 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.355) 0:00:54.584 *********** 2025-06-02 17:32:00.629260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-02 17:32:00.629943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-02 17:32:00.631062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-02 17:32:00.632986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-02 17:32:00.634125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-02 17:32:00.634974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-02 17:32:00.635787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-02 17:32:00.636830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-02 17:32:00.638437 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-02 17:32:00.638651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-02 17:32:00.639710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-02 17:32:00.642013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-02 17:32:00.642350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-02 17:32:00.643097 | orchestrator | 2025-06-02 17:32:00.643633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:00.644205 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.406) 0:00:54.991 *********** 2025-06-02 17:32:00.827711 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:00.828227 | orchestrator | 2025-06-02 17:32:00.829500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:00.830246 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.198) 0:00:55.189 *********** 2025-06-02 17:32:01.040213 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:01.042341 | orchestrator | 2025-06-02 17:32:01.042770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:01.044454 | orchestrator | Monday 02 June 2025 17:32:01 +0000 (0:00:00.213) 0:00:55.402 *********** 2025-06-02 17:32:01.686871 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:01.687075 | orchestrator | 2025-06-02 17:32:01.690133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:01.690213 | orchestrator | Monday 02 June 2025 17:32:01 +0000 (0:00:00.644) 0:00:56.047 *********** 2025-06-02 17:32:01.896053 | orchestrator | 
skipping: [testbed-node-5] 2025-06-02 17:32:01.896741 | orchestrator | 2025-06-02 17:32:01.897505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:01.898766 | orchestrator | Monday 02 June 2025 17:32:01 +0000 (0:00:00.210) 0:00:56.258 *********** 2025-06-02 17:32:02.101895 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:02.102285 | orchestrator | 2025-06-02 17:32:02.103367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:02.104232 | orchestrator | Monday 02 June 2025 17:32:02 +0000 (0:00:00.205) 0:00:56.464 *********** 2025-06-02 17:32:02.331442 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:02.332414 | orchestrator | 2025-06-02 17:32:02.333304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:02.334199 | orchestrator | Monday 02 June 2025 17:32:02 +0000 (0:00:00.228) 0:00:56.692 *********** 2025-06-02 17:32:02.565174 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:02.565366 | orchestrator | 2025-06-02 17:32:02.569497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:02.570793 | orchestrator | Monday 02 June 2025 17:32:02 +0000 (0:00:00.233) 0:00:56.926 *********** 2025-06-02 17:32:02.768828 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:02.769003 | orchestrator | 2025-06-02 17:32:02.769720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:02.769989 | orchestrator | Monday 02 June 2025 17:32:02 +0000 (0:00:00.205) 0:00:57.131 *********** 2025-06-02 17:32:03.421269 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-02 17:32:03.422459 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-02 17:32:03.424411 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-02 
17:32:03.425201 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 17:32:03.426440 | orchestrator | 2025-06-02 17:32:03.427433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:03.428331 | orchestrator | Monday 02 June 2025 17:32:03 +0000 (0:00:00.650) 0:00:57.782 *********** 2025-06-02 17:32:03.622180 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:03.622292 | orchestrator | 2025-06-02 17:32:03.623436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:03.624128 | orchestrator | Monday 02 June 2025 17:32:03 +0000 (0:00:00.200) 0:00:57.983 *********** 2025-06-02 17:32:03.821995 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:03.822918 | orchestrator | 2025-06-02 17:32:03.824215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:03.825412 | orchestrator | Monday 02 June 2025 17:32:03 +0000 (0:00:00.199) 0:00:58.183 *********** 2025-06-02 17:32:04.030871 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:04.031567 | orchestrator | 2025-06-02 17:32:04.032260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:04.032841 | orchestrator | Monday 02 June 2025 17:32:04 +0000 (0:00:00.210) 0:00:58.393 *********** 2025-06-02 17:32:04.236975 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:04.238331 | orchestrator | 2025-06-02 17:32:04.239500 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 17:32:04.241117 | orchestrator | Monday 02 June 2025 17:32:04 +0000 (0:00:00.204) 0:00:58.598 *********** 2025-06-02 17:32:04.589935 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:04.590126 | orchestrator | 2025-06-02 17:32:04.590210 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-06-02 17:32:04.590494 | orchestrator | Monday 02 June 2025 17:32:04 +0000 (0:00:00.354) 0:00:58.952 *********** 2025-06-02 17:32:04.777431 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}}) 2025-06-02 17:32:04.778255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}}) 2025-06-02 17:32:04.779337 | orchestrator | 2025-06-02 17:32:04.780718 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 17:32:04.781659 | orchestrator | Monday 02 June 2025 17:32:04 +0000 (0:00:00.186) 0:00:59.139 *********** 2025-06-02 17:32:06.612642 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}) 2025-06-02 17:32:06.613890 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}) 2025-06-02 17:32:06.616499 | orchestrator | 2025-06-02 17:32:06.617688 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 17:32:06.618875 | orchestrator | Monday 02 June 2025 17:32:06 +0000 (0:00:01.833) 0:01:00.972 *********** 2025-06-02 17:32:06.767809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:06.767964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:06.768693 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:06.769109 | orchestrator | 2025-06-02 17:32:06.769810 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-06-02 17:32:06.770818 | orchestrator | Monday 02 June 2025 17:32:06 +0000 (0:00:00.157) 0:01:01.130 *********** 2025-06-02 17:32:08.084743 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}) 2025-06-02 17:32:08.086943 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}) 2025-06-02 17:32:08.088246 | orchestrator | 2025-06-02 17:32:08.089187 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 17:32:08.091975 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:01.313) 0:01:02.443 *********** 2025-06-02 17:32:08.245419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:08.245628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:08.246373 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:08.246551 | orchestrator | 2025-06-02 17:32:08.247657 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 17:32:08.247900 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.163) 0:01:02.606 *********** 2025-06-02 17:32:08.408211 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:08.408421 | orchestrator | 2025-06-02 17:32:08.409734 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 17:32:08.410662 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.163) 0:01:02.770 *********** 2025-06-02 17:32:08.582263 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:08.583380 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:08.584353 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:08.586392 | orchestrator | 2025-06-02 17:32:08.587133 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 17:32:08.588706 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.171) 0:01:02.941 *********** 2025-06-02 17:32:08.716332 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:08.716907 | orchestrator | 2025-06-02 17:32:08.717990 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 17:32:08.718869 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.134) 0:01:03.076 *********** 2025-06-02 17:32:08.867110 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:08.868108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:08.869270 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:08.869976 | orchestrator | 2025-06-02 17:32:08.871379 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 17:32:08.871829 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.153) 0:01:03.229 *********** 2025-06-02 17:32:09.011254 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:09.013584 | orchestrator | 2025-06-02 17:32:09.013616 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 17:32:09.014773 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.141) 0:01:03.371 *********** 2025-06-02 17:32:09.177967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:09.179152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:09.180748 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:09.181903 | orchestrator | 2025-06-02 17:32:09.183504 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 17:32:09.185135 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.168) 0:01:03.539 *********** 2025-06-02 17:32:09.332041 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:09.332822 | orchestrator | 2025-06-02 17:32:09.333657 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 17:32:09.334291 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.154) 0:01:03.694 *********** 2025-06-02 17:32:09.731752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:09.731821 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:09.732733 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:09.733110 | orchestrator | 2025-06-02 17:32:09.733975 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 17:32:09.734950 | orchestrator | Monday 02 June 2025 
17:32:09 +0000 (0:00:00.398) 0:01:04.093 *********** 2025-06-02 17:32:09.891744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:09.892071 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:09.892997 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:09.894836 | orchestrator | 2025-06-02 17:32:09.896246 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 17:32:09.897986 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.161) 0:01:04.254 *********** 2025-06-02 17:32:10.059421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:10.059751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:10.060556 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:10.061484 | orchestrator | 2025-06-02 17:32:10.062772 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 17:32:10.063779 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.167) 0:01:04.422 *********** 2025-06-02 17:32:10.217588 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:10.217973 | orchestrator | 2025-06-02 17:32:10.220105 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 17:32:10.221191 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.157) 0:01:04.579 *********** 2025-06-02 17:32:10.360679 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
17:32:10.360729 | orchestrator | 2025-06-02 17:32:10.361699 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 17:32:10.362404 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.143) 0:01:04.723 *********** 2025-06-02 17:32:10.507261 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:10.507826 | orchestrator | 2025-06-02 17:32:10.508813 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 17:32:10.509232 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.146) 0:01:04.869 *********** 2025-06-02 17:32:10.658185 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:32:10.658435 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 17:32:10.659247 | orchestrator | } 2025-06-02 17:32:10.660473 | orchestrator | 2025-06-02 17:32:10.661072 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 17:32:10.661616 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.149) 0:01:05.019 *********** 2025-06-02 17:32:10.813869 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:32:10.815681 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 17:32:10.816044 | orchestrator | } 2025-06-02 17:32:10.817912 | orchestrator | 2025-06-02 17:32:10.818961 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 17:32:10.819702 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.155) 0:01:05.175 *********** 2025-06-02 17:32:10.958294 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:32:10.959700 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 17:32:10.960574 | orchestrator | } 2025-06-02 17:32:10.961289 | orchestrator | 2025-06-02 17:32:10.962831 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 17:32:10.964007 | 
orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.143) 0:01:05.319 *********** 2025-06-02 17:32:11.482833 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:11.483003 | orchestrator | 2025-06-02 17:32:11.483419 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 17:32:11.484065 | orchestrator | Monday 02 June 2025 17:32:11 +0000 (0:00:00.524) 0:01:05.844 *********** 2025-06-02 17:32:12.000112 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:12.000245 | orchestrator | 2025-06-02 17:32:12.000629 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 17:32:12.001348 | orchestrator | Monday 02 June 2025 17:32:11 +0000 (0:00:00.515) 0:01:06.359 *********** 2025-06-02 17:32:12.514586 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:12.515652 | orchestrator | 2025-06-02 17:32:12.516400 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 17:32:12.517421 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.517) 0:01:06.876 *********** 2025-06-02 17:32:12.878388 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:12.879624 | orchestrator | 2025-06-02 17:32:12.881266 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 17:32:12.881297 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.363) 0:01:07.239 *********** 2025-06-02 17:32:13.000500 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:13.001553 | orchestrator | 2025-06-02 17:32:13.003022 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 17:32:13.004214 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.123) 0:01:07.363 *********** 2025-06-02 17:32:13.131361 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:13.132506 | orchestrator | 2025-06-02 17:32:13.134148 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 17:32:13.134375 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.129) 0:01:07.492 *********** 2025-06-02 17:32:13.283230 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:32:13.283314 | orchestrator |  "vgs_report": { 2025-06-02 17:32:13.283330 | orchestrator |  "vg": [] 2025-06-02 17:32:13.284000 | orchestrator |  } 2025-06-02 17:32:13.285495 | orchestrator | } 2025-06-02 17:32:13.285842 | orchestrator | 2025-06-02 17:32:13.286610 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 17:32:13.287460 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.148) 0:01:07.641 *********** 2025-06-02 17:32:13.423650 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:13.424574 | orchestrator | 2025-06-02 17:32:13.425386 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 17:32:13.426322 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.143) 0:01:07.784 *********** 2025-06-02 17:32:13.576427 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:13.577996 | orchestrator | 2025-06-02 17:32:13.580865 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 17:32:13.582013 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.154) 0:01:07.939 *********** 2025-06-02 17:32:13.720445 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:13.721433 | orchestrator | 2025-06-02 17:32:13.722628 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 17:32:13.723995 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.141) 0:01:08.080 *********** 2025-06-02 17:32:13.860963 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:13.862459 | orchestrator | 2025-06-02 17:32:13.863448 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 17:32:13.864543 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.142) 0:01:08.223 *********** 2025-06-02 17:32:13.995030 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:13.996415 | orchestrator | 2025-06-02 17:32:13.998677 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 17:32:13.999999 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.133) 0:01:08.356 *********** 2025-06-02 17:32:14.149225 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:14.150112 | orchestrator | 2025-06-02 17:32:14.151013 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 17:32:14.153397 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.154) 0:01:08.511 *********** 2025-06-02 17:32:14.289810 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:14.290694 | orchestrator | 2025-06-02 17:32:14.291841 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 17:32:14.292599 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.140) 0:01:08.652 *********** 2025-06-02 17:32:14.462487 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:14.462881 | orchestrator | 2025-06-02 17:32:14.464582 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 17:32:14.465022 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.170) 0:01:08.822 *********** 2025-06-02 17:32:14.844707 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:14.845256 | orchestrator | 2025-06-02 17:32:14.845959 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 17:32:14.846243 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.380) 0:01:09.203 *********** 
2025-06-02 17:32:14.995900 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:14.995989 | orchestrator | 2025-06-02 17:32:14.996057 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 17:32:14.997406 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.154) 0:01:09.357 *********** 2025-06-02 17:32:15.144830 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:15.145892 | orchestrator | 2025-06-02 17:32:15.147162 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 17:32:15.148273 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.149) 0:01:09.506 *********** 2025-06-02 17:32:15.296931 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:15.298337 | orchestrator | 2025-06-02 17:32:15.299180 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 17:32:15.300216 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.152) 0:01:09.659 *********** 2025-06-02 17:32:15.459269 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:15.460211 | orchestrator | 2025-06-02 17:32:15.461118 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 17:32:15.462256 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.161) 0:01:09.821 *********** 2025-06-02 17:32:15.603627 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:15.605038 | orchestrator | 2025-06-02 17:32:15.606127 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 17:32:15.607155 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.143) 0:01:09.965 *********** 2025-06-02 17:32:15.765217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 
17:32:15.765296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:15.766109 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:15.766149 | orchestrator | 2025-06-02 17:32:15.766409 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 17:32:15.766791 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.161) 0:01:10.126 *********** 2025-06-02 17:32:15.917477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:15.919272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:15.920274 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:15.921446 | orchestrator | 2025-06-02 17:32:15.922297 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 17:32:15.923071 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.153) 0:01:10.280 *********** 2025-06-02 17:32:16.088692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:16.088975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:16.092391 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:16.096634 | orchestrator | 2025-06-02 17:32:16.096672 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 17:32:16.096891 | orchestrator | Monday 02 June 2025 
17:32:16 +0000 (0:00:00.168) 0:01:10.449 *********** 2025-06-02 17:32:16.243029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:16.246674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:16.248761 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:16.248839 | orchestrator | 2025-06-02 17:32:16.249689 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 17:32:16.252018 | orchestrator | Monday 02 June 2025 17:32:16 +0000 (0:00:00.156) 0:01:10.605 *********** 2025-06-02 17:32:16.407670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:16.408361 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:16.408964 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:16.410867 | orchestrator | 2025-06-02 17:32:16.410893 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 17:32:16.411138 | orchestrator | Monday 02 June 2025 17:32:16 +0000 (0:00:00.162) 0:01:10.768 *********** 2025-06-02 17:32:16.571286 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})  2025-06-02 17:32:16.571785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})  2025-06-02 17:32:16.572488 | orchestrator | 
skipping: [testbed-node-5]
2025-06-02 17:32:16.573315 | orchestrator |
2025-06-02 17:32:16.574574 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 17:32:16.575608 | orchestrator | Monday 02 June 2025 17:32:16 +0000 (0:00:00.160) 0:01:10.929 ***********
2025-06-02 17:32:16.992981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})
2025-06-02 17:32:16.993117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})
2025-06-02 17:32:16.993481 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:16.994308 | orchestrator |
2025-06-02 17:32:16.995341 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 17:32:16.995362 | orchestrator | Monday 02 June 2025 17:32:16 +0000 (0:00:00.425) 0:01:11.354 ***********
2025-06-02 17:32:17.149670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})
2025-06-02 17:32:17.149786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})
2025-06-02 17:32:17.152642 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:17.152682 | orchestrator |
2025-06-02 17:32:17.152697 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 17:32:17.153748 | orchestrator | Monday 02 June 2025 17:32:17 +0000 (0:00:00.156) 0:01:11.510 ***********
2025-06-02 17:32:17.680611 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:32:17.681940 | orchestrator |
2025-06-02 17:32:17.682928 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 17:32:17.684006 | orchestrator | Monday 02 June 2025 17:32:17 +0000 (0:00:00.531) 0:01:12.042 ***********
2025-06-02 17:32:18.203470 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:32:18.204755 | orchestrator |
2025-06-02 17:32:18.205111 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 17:32:18.206253 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.522) 0:01:12.565 ***********
2025-06-02 17:32:18.361036 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:32:18.361581 | orchestrator |
2025-06-02 17:32:18.362303 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 17:32:18.364375 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.157) 0:01:12.722 ***********
2025-06-02 17:32:18.526116 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'vg_name': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})
2025-06-02 17:32:18.527747 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'vg_name': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})
2025-06-02 17:32:18.528304 | orchestrator |
2025-06-02 17:32:18.530642 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 17:32:18.530685 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.165) 0:01:12.888 ***********
2025-06-02 17:32:18.678263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})
2025-06-02 17:32:18.678892 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})
2025-06-02 17:32:18.679550 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:18.681172 | orchestrator |
2025-06-02 17:32:18.683061 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 17:32:18.683794 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.151) 0:01:13.039 ***********
2025-06-02 17:32:18.836081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})
2025-06-02 17:32:18.836150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})
2025-06-02 17:32:18.836735 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:18.837760 | orchestrator |
2025-06-02 17:32:18.837922 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 17:32:18.838639 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.156) 0:01:13.196 ***********
2025-06-02 17:32:19.008996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'})
2025-06-02 17:32:19.010071 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'})
2025-06-02 17:32:19.011202 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:19.012922 | orchestrator |
2025-06-02 17:32:19.013042 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 17:32:19.014086 | orchestrator | Monday 02 June 2025 17:32:19 +0000 (0:00:00.174) 0:01:13.371 ***********
2025-06-02 17:32:19.160619 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 17:32:19.161299 | orchestrator |  "lvm_report": {
2025-06-02 17:32:19.162634 | orchestrator |  "lv": [
2025-06-02 17:32:19.163211 | orchestrator |  {
2025-06-02 17:32:19.164575 | orchestrator |  "lv_name": "osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c",
2025-06-02 17:32:19.165452 | orchestrator |  "vg_name": "ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c"
2025-06-02 17:32:19.165949 | orchestrator |  },
2025-06-02 17:32:19.166841 | orchestrator |  {
2025-06-02 17:32:19.167476 | orchestrator |  "lv_name": "osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b",
2025-06-02 17:32:19.168354 | orchestrator |  "vg_name": "ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b"
2025-06-02 17:32:19.168852 | orchestrator |  }
2025-06-02 17:32:19.169588 | orchestrator |  ],
2025-06-02 17:32:19.170481 | orchestrator |  "pv": [
2025-06-02 17:32:19.171113 | orchestrator |  {
2025-06-02 17:32:19.172149 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-02 17:32:19.172474 | orchestrator |  "vg_name": "ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c"
2025-06-02 17:32:19.173327 | orchestrator |  },
2025-06-02 17:32:19.173774 | orchestrator |  {
2025-06-02 17:32:19.174827 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-02 17:32:19.175274 | orchestrator |  "vg_name": "ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b"
2025-06-02 17:32:19.176307 | orchestrator |  }
2025-06-02 17:32:19.177010 | orchestrator |  ]
2025-06-02 17:32:19.177399 | orchestrator |  }
2025-06-02 17:32:19.178192 | orchestrator | }
2025-06-02 17:32:19.179354 | orchestrator |
2025-06-02 17:32:19.180251 | orchestrator | 2025-06-02 17:32:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:32:19.180295 | orchestrator | 2025-06-02 17:32:19 | INFO  | Please wait and do not abort execution.
2025-06-02 17:32:19.180422 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:32:19.180999 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 17:32:19.181323 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 17:32:19.181708 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 17:32:19.182230 | orchestrator |
2025-06-02 17:32:19.182432 | orchestrator |
2025-06-02 17:32:19.182783 | orchestrator |
2025-06-02 17:32:19.183196 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:32:19.183399 | orchestrator | Monday 02 June 2025 17:32:19 +0000 (0:00:00.151) 0:01:13.522 ***********
2025-06-02 17:32:19.183799 | orchestrator | ===============================================================================
2025-06-02 17:32:19.184013 | orchestrator | Create block VGs -------------------------------------------------------- 5.74s
2025-06-02 17:32:19.184443 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s
2025-06-02 17:32:19.184514 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s
2025-06-02 17:32:19.184912 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s
2025-06-02 17:32:19.185282 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2025-06-02 17:32:19.186189 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s
2025-06-02 17:32:19.186276 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.52s
2025-06-02 17:32:19.186291 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s
2025-06-02 17:32:19.186386 | orchestrator | Add known links to the list of available block devices ------------------ 1.35s
2025-06-02 17:32:19.186766 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2025-06-02 17:32:19.187117 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s
2025-06-02 17:32:19.187599 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s
2025-06-02 17:32:19.187817 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s
2025-06-02 17:32:19.188287 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.75s
2025-06-02 17:32:19.188801 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2025-06-02 17:32:19.188955 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.71s
2025-06-02 17:32:19.188972 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.70s
2025-06-02 17:32:19.189420 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2025-06-02 17:32:19.189634 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.69s
2025-06-02 17:32:19.189946 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.69s
2025-06-02 17:32:21.585427 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:32:21.585512 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:32:21.585564 | orchestrator | Registering Redlock._release_script
2025-06-02 17:32:21.645982 | orchestrator | 2025-06-02 17:32:21 | INFO  | Task 0d03bc3c-c5b1-4af7-ae09-399f0a723b4a (facts) was prepared for execution.
2025-06-02 17:32:21.646140 | orchestrator | 2025-06-02 17:32:21 | INFO  | It takes a moment until task 0d03bc3c-c5b1-4af7-ae09-399f0a723b4a (facts) has been started and output is visible here.
2025-06-02 17:32:25.798992 | orchestrator |
2025-06-02 17:32:25.799106 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 17:32:25.799121 | orchestrator |
2025-06-02 17:32:25.799199 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 17:32:25.800382 | orchestrator | Monday 02 June 2025 17:32:25 +0000 (0:00:00.267) 0:00:00.267 ***********
2025-06-02 17:32:26.939945 | orchestrator | ok: [testbed-manager]
2025-06-02 17:32:26.941944 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:32:26.943180 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:32:26.944935 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:32:26.946457 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:26.946886 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:26.947507 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:32:26.948659 | orchestrator |
2025-06-02 17:32:26.949125 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 17:32:26.949902 | orchestrator | Monday 02 June 2025 17:32:26 +0000 (0:00:01.141) 0:00:01.409 ***********
2025-06-02 17:32:27.104666 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:32:27.187056 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:32:27.309123 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:32:27.388287 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:32:27.484439 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:28.237361 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:28.241027 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:28.241062 | orchestrator |
2025-06-02 17:32:28.241378 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:32:28.242718 | orchestrator |
2025-06-02 17:32:28.243751 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:32:28.245251 | orchestrator | Monday 02 June 2025 17:32:28 +0000 (0:00:01.301) 0:00:02.711 ***********
2025-06-02 17:32:33.018368 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:32:33.020238 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:32:33.020972 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:32:33.023919 | orchestrator | ok: [testbed-manager]
2025-06-02 17:32:33.023946 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:33.023958 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:32:33.024044 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:33.025165 | orchestrator |
2025-06-02 17:32:33.026686 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 17:32:33.027590 | orchestrator |
2025-06-02 17:32:33.028557 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 17:32:33.029634 | orchestrator | Monday 02 June 2025 17:32:33 +0000 (0:00:04.782) 0:00:07.493 ***********
2025-06-02 17:32:33.178811 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:32:33.254171 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:32:33.326932 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:32:33.404922 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:32:33.497853 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:33.542569 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:33.542700 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:33.544253 | orchestrator |
2025-06-02 17:32:33.545787 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:32:33.546208 | orchestrator | 2025-06-02 17:32:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:32:33.546588 | orchestrator | 2025-06-02 17:32:33 | INFO  | Please wait and do not abort execution.
2025-06-02 17:32:33.547464 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:32:33.548129 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:32:33.549090 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:32:33.549200 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:32:33.549914 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:32:33.550925 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:32:33.551854 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:32:33.551987 | orchestrator |
2025-06-02 17:32:33.552133 | orchestrator |
2025-06-02 17:32:33.552428 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:32:33.553235 | orchestrator | Monday 02 June 2025 17:32:33 +0000 (0:00:00.523) 0:00:08.017 ***********
2025-06-02 17:32:33.553506 | orchestrator | ===============================================================================
2025-06-02 17:32:33.554160 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s
2025-06-02 17:32:33.554441 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s
2025-06-02 17:32:33.554955 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s
2025-06-02 17:32:33.555417 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-06-02 17:32:34.249810 | orchestrator |
2025-06-02 17:32:34.253414 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jun 2 17:32:34 UTC 2025
2025-06-02 17:32:34.253500 | orchestrator |
2025-06-02 17:32:36.018002 | orchestrator | 2025-06-02 17:32:36 | INFO  | Collection nutshell is prepared for execution
2025-06-02 17:32:36.018164 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [0] - dotfiles
2025-06-02 17:32:36.022709 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:32:36.022742 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:32:36.022754 | orchestrator | Registering Redlock._release_script
2025-06-02 17:32:36.029045 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [0] - homer
2025-06-02 17:32:36.029077 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [0] - netdata
2025-06-02 17:32:36.029089 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [0] - openstackclient
2025-06-02 17:32:36.029155 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [0] - phpmyadmin
2025-06-02 17:32:36.029169 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [0] - common
2025-06-02 17:32:36.031063 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [1] -- loadbalancer
2025-06-02 17:32:36.031138 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [2] --- opensearch
2025-06-02 17:32:36.031151 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [2] --- mariadb-ng
2025-06-02 17:32:36.031161 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [3] ---- horizon
2025-06-02 17:32:36.031261 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [3] ---- keystone
2025-06-02 17:32:36.031277 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [4] ----- neutron
2025-06-02 17:32:36.031289 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [5] ------ wait-for-nova
2025-06-02 17:32:36.031301 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [5] ------ octavia
2025-06-02 17:32:36.032094 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [4] ----- barbican
2025-06-02 17:32:36.032124 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [4] ----- designate
2025-06-02 17:32:36.032138 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [4] ----- ironic
2025-06-02 17:32:36.032152 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [4] ----- placement
2025-06-02 17:32:36.032165 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [4] ----- magnum
2025-06-02 17:32:36.032423 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [1] -- openvswitch
2025-06-02 17:32:36.032443 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [2] --- ovn
2025-06-02 17:32:36.032604 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [1] -- memcached
2025-06-02 17:32:36.032854 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [1] -- redis
2025-06-02 17:32:36.032875 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [1] -- rabbitmq-ng
2025-06-02 17:32:36.032966 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [0] - kubernetes
2025-06-02 17:32:36.035027 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [1] -- kubeconfig
2025-06-02 17:32:36.035113 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [1] -- copy-kubeconfig
2025-06-02 17:32:36.035128 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [0] - ceph
2025-06-02 17:32:36.037919 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [1] -- ceph-pools
2025-06-02 17:32:36.037956 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [2] --- copy-ceph-keys
2025-06-02 17:32:36.037968 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [3] ---- cephclient
2025-06-02 17:32:36.037980 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-02 17:32:36.037992 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [4] ----- wait-for-keystone
2025-06-02 17:32:36.038004 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-02 17:32:36.038060 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [5] ------ glance
2025-06-02 17:32:36.038074 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [5] ------ cinder
2025-06-02 17:32:36.038085 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [5] ------ nova
2025-06-02 17:32:36.038170 | orchestrator | 2025-06-02 17:32:36 | INFO  | A [4] ----- prometheus
2025-06-02 17:32:36.038187 | orchestrator | 2025-06-02 17:32:36 | INFO  | D [5] ------ grafana
2025-06-02 17:32:36.242744 | orchestrator | 2025-06-02 17:32:36 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-02 17:32:36.242852 | orchestrator | 2025-06-02 17:32:36 | INFO  | Tasks are running in the background
2025-06-02 17:32:38.811057 | orchestrator | 2025-06-02 17:32:38 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-02 17:32:40.955858 | orchestrator | 2025-06-02 17:32:40 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:32:40.955998 | orchestrator | 2025-06-02 17:32:40 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:32:40.956631 | orchestrator | 2025-06-02 17:32:40 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:32:40.959328 | orchestrator | 2025-06-02 17:32:40 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:32:40.959976 | orchestrator | 2025-06-02 17:32:40 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:32:40.960672 | orchestrator | 2025-06-02 17:32:40 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:32:40.961362 | orchestrator | 2025-06-02 17:32:40 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:32:40.961384 | orchestrator | 2025-06-02 17:32:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:32:43.995165 | orchestrator | 2025-06-02 17:32:43 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:32:43.995335 | orchestrator | 2025-06-02 17:32:43 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:32:43.995727 | orchestrator | 2025-06-02 17:32:43 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:32:44.003594 | orchestrator | 2025-06-02 17:32:44 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:32:44.004646 | orchestrator | 2025-06-02 17:32:44 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:32:44.008225 | orchestrator | 2025-06-02 17:32:44 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:32:44.008463 | orchestrator | 2025-06-02 17:32:44 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:32:44.008489 | orchestrator | 2025-06-02 17:32:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:32:47.053554 | orchestrator | 2025-06-02 17:32:47 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:32:47.054120 | orchestrator | 2025-06-02 17:32:47 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:32:47.059887 | orchestrator | 2025-06-02 17:32:47 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:32:47.066326 | orchestrator | 2025-06-02 17:32:47 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:32:47.066679 | orchestrator | 2025-06-02 17:32:47 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:32:47.071865 | orchestrator | 2025-06-02 17:32:47 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:32:47.071905 | orchestrator | 2025-06-02 17:32:47 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:32:47.071918 | orchestrator | 2025-06-02 17:32:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:32:50.132464 | orchestrator | 2025-06-02 17:32:50 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:32:50.138333 | orchestrator | 2025-06-02 17:32:50 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:32:50.138387 | orchestrator | 2025-06-02 17:32:50 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:32:50.143080 | orchestrator | 2025-06-02 17:32:50 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:32:50.154098 | orchestrator | 2025-06-02 17:32:50 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:32:50.155086 | orchestrator | 2025-06-02 17:32:50 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:32:50.157225 | orchestrator | 2025-06-02 17:32:50 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:32:50.157294 | orchestrator | 2025-06-02 17:32:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:32:53.231728 | orchestrator | 2025-06-02 17:32:53 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:32:53.236011 | orchestrator | 2025-06-02 17:32:53 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:32:53.237549 | orchestrator | 2025-06-02 17:32:53 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:32:53.238933 | orchestrator | 2025-06-02 17:32:53 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:32:53.243419 | orchestrator | 2025-06-02 17:32:53 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:32:53.244800 | orchestrator | 2025-06-02 17:32:53 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:32:53.253980 | orchestrator | 2025-06-02 17:32:53 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:32:53.254083 | orchestrator | 2025-06-02 17:32:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:32:56.314991 | orchestrator | 2025-06-02 17:32:56 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:32:56.320076 | orchestrator | 2025-06-02 17:32:56 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:32:56.329959 | orchestrator | 2025-06-02 17:32:56 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:32:56.333138 | orchestrator | 2025-06-02 17:32:56 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:32:56.335174 | orchestrator | 2025-06-02 17:32:56 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:32:56.339979 | orchestrator | 2025-06-02 17:32:56 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:32:56.341315 | orchestrator | 2025-06-02 17:32:56 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:32:56.345433 | orchestrator | 2025-06-02 17:32:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:32:59.407331 | orchestrator | 2025-06-02 17:32:59 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:32:59.410501 | orchestrator | 2025-06-02 17:32:59 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:32:59.415033 | orchestrator | 2025-06-02 17:32:59 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:32:59.415083 | orchestrator | 2025-06-02 17:32:59 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:32:59.415097 | orchestrator | 2025-06-02 17:32:59 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:32:59.418432 | orchestrator | 2025-06-02 17:32:59 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:32:59.418456 | orchestrator | 2025-06-02 17:32:59 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:32:59.418468 | orchestrator | 2025-06-02 17:32:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:02.480857 | orchestrator | 2025-06-02 17:33:02 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:02.486808 | orchestrator | 2025-06-02 17:33:02 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:33:02.489639 | orchestrator | 2025-06-02 17:33:02 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:33:02.490600 | orchestrator | 2025-06-02 17:33:02 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:02.493735 | orchestrator | 2025-06-02 17:33:02 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:02.496615 | orchestrator | 2025-06-02 17:33:02 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:02.502176 | orchestrator | 2025-06-02 17:33:02 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:02.502215 | orchestrator | 2025-06-02 17:33:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:05.557934 | orchestrator | 2025-06-02 17:33:05 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:05.557998 | orchestrator | 2025-06-02 17:33:05 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:33:05.558065 | orchestrator | 2025-06-02 17:33:05 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state STARTED
2025-06-02 17:33:05.559547 | orchestrator | 2025-06-02 17:33:05 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:05.561124 | orchestrator | 2025-06-02 17:33:05 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:05.563059 | orchestrator | 2025-06-02 17:33:05 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:05.566415 | orchestrator | 2025-06-02 17:33:05 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:05.566443 | orchestrator | 2025-06-02 17:33:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:08.624043 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:08.624202 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:33:08.624364 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task 7f24eff2-a6c7-42eb-8bd4-671681259c37 is in state SUCCESS
2025-06-02 17:33:08.625226 | orchestrator |
2025-06-02 17:33:08.625291 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-06-02 17:33:08.625304 | orchestrator |
2025-06-02 17:33:08.625316 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-06-02 17:33:08.625327 | orchestrator | Monday 02 June 2025 17:32:50 +0000 (0:00:01.349) 0:00:01.349 ***********
2025-06-02 17:33:08.625339 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:08.625351 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:33:08.625362 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:33:08.625372 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:33:08.625383 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:33:08.625394 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:33:08.625404 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:33:08.625415 | orchestrator |
2025-06-02 17:33:08.625426 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-02 17:33:08.625438 | orchestrator | Monday 02 June 2025 17:32:54 +0000 (0:00:04.070) 0:00:05.420 ***********
2025-06-02 17:33:08.625450 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-02 17:33:08.625468 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 17:33:08.625480 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 17:33:08.625491 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 17:33:08.625549 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 17:33:08.625562 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 17:33:08.625573 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 17:33:08.625583 | orchestrator |
2025-06-02 17:33:08.625594 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-06-02 17:33:08.625605 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:01.841) 0:00:07.261 ***********
2025-06-02 17:33:08.625620 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:32:55.306859', 'end': '2025-06-02 17:32:55.315671', 'delta': '0:00:00.008812', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:08.625635 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:32:55.411289', 'end': '2025-06-02 17:32:55.421688', 'delta': '0:00:00.010399', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:08.625647 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:32:55.363418', 'end': '2025-06-02 17:32:55.370584', 'delta': '0:00:00.007166', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:08.625673 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:32:55.455489', 'end': '2025-06-02 17:32:55.464103', 'delta': '0:00:00.008614', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:08.625695 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:32:55.503221', 'end': '2025-06-02 17:32:55.511795', 'delta': '0:00:00.008574', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:08.625731 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:32:55.789542', 'end': '2025-06-02 17:32:55.795820', 'delta': '0:00:00.006278', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:08.625751 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:32:55.922163', 'end': '2025-06-02 17:32:55.929358', 'delta': '0:00:00.007195', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:08.625771 | orchestrator |
2025-06-02 17:33:08.625790 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
**** 2025-06-02 17:33:08.625869 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:03.607) 0:00:10.869 *********** 2025-06-02 17:33:08.625883 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-02 17:33:08.625896 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 17:33:08.625908 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 17:33:08.625920 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 17:33:08.625932 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 17:33:08.625945 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 17:33:08.625958 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 17:33:08.625970 | orchestrator | 2025-06-02 17:33:08.625982 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-02 17:33:08.625994 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:02.051) 0:00:12.921 *********** 2025-06-02 17:33:08.626006 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-02 17:33:08.626121 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 17:33:08.626136 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 17:33:08.626149 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 17:33:08.626162 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 17:33:08.626175 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 17:33:08.626187 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 17:33:08.626209 | orchestrator | 2025-06-02 17:33:08.626221 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:33:08.626244 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:08.626258 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:08.626269 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:08.626280 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:08.626291 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:08.626302 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:08.626312 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:08.626323 | orchestrator | 2025-06-02 17:33:08.626334 | orchestrator | 2025-06-02 17:33:08.626345 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:33:08.626356 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:03.507) 0:00:16.428 *********** 2025-06-02 17:33:08.626367 | orchestrator | =============================================================================== 2025-06-02 17:33:08.626378 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.07s 2025-06-02 17:33:08.626716 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.61s 2025-06-02 17:33:08.626742 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.51s 2025-06-02 17:33:08.626761 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.05s 2025-06-02 17:33:08.626778 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.84s 2025-06-02 17:33:08.626804 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:33:08.633045 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED 2025-06-02 17:33:08.633720 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED 2025-06-02 17:33:08.635490 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED 2025-06-02 17:33:08.641714 | orchestrator | 2025-06-02 17:33:08 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED 2025-06-02 17:33:08.641761 | orchestrator | 2025-06-02 17:33:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:33:11.684038 | orchestrator | 2025-06-02 17:33:11 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:33:11.684167 | orchestrator | 2025-06-02 17:33:11 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED 2025-06-02 17:33:11.684190 | orchestrator | 2025-06-02 17:33:11 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:33:11.684208 | orchestrator | 2025-06-02 17:33:11 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED 2025-06-02 17:33:11.687242 | orchestrator | 2025-06-02 17:33:11 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED 2025-06-02 17:33:11.687873 | orchestrator | 2025-06-02 17:33:11 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED 2025-06-02 17:33:11.688273 | orchestrator | 2025-06-02 17:33:11 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED 2025-06-02 17:33:11.688292 | orchestrator | 2025-06-02 17:33:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:33:14.723797 | orchestrator | 2025-06-02 17:33:14 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state 
STARTED 2025-06-02 17:33:14.726745 | orchestrator | 2025-06-02 17:33:14 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED 2025-06-02 17:33:14.729785 | orchestrator | 2025-06-02 17:33:14 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:33:14.729854 | orchestrator | 2025-06-02 17:33:14 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED 2025-06-02 17:33:14.729877 | orchestrator | 2025-06-02 17:33:14 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED 2025-06-02 17:33:14.731641 | orchestrator | 2025-06-02 17:33:14 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED 2025-06-02 17:33:14.731706 | orchestrator | 2025-06-02 17:33:14 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED 2025-06-02 17:33:14.731727 | orchestrator | 2025-06-02 17:33:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:33:17.793215 | orchestrator | 2025-06-02 17:33:17 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:33:17.793407 | orchestrator | 2025-06-02 17:33:17 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED 2025-06-02 17:33:17.801004 | orchestrator | 2025-06-02 17:33:17 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:33:17.801100 | orchestrator | 2025-06-02 17:33:17 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED 2025-06-02 17:33:17.801116 | orchestrator | 2025-06-02 17:33:17 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED 2025-06-02 17:33:17.801129 | orchestrator | 2025-06-02 17:33:17 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED 2025-06-02 17:33:17.801141 | orchestrator | 2025-06-02 17:33:17 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED 2025-06-02 17:33:17.801152 | orchestrator | 2025-06-02 17:33:17 | INFO  | Wait 1 second(s) until the next check 
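The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a simple poll-and-sleep loop in the OSISM manager output. A minimal sketch of that pattern, with `get_state` as a hypothetical stand-in for the real task-state lookup (not the actual OSISM client API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1, log=print):
    """Poll every task until none is left in STARTED, logging each check
    the way the console output above does. `get_state` is a hypothetical
    callable returning a state string for a task id."""
    pending = list(task_ids)
    while pending:
        still_started = []
        for task_id in pending:
            state = get_state(task_id)  # assumed state lookup, not the real API
            log(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_started.append(task_id)
        pending = still_started
        if pending:
            log(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

Tasks that report SUCCESS drop out of the polling set, which matches how task 9a2bb6eb disappears from the log after 17:33:23.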
2025-06-02 17:33:20.856484 | orchestrator | 2025-06-02 17:33:20 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:20.856652 | orchestrator | 2025-06-02 17:33:20 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state STARTED
2025-06-02 17:33:20.856668 | orchestrator | 2025-06-02 17:33:20 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:20.856681 | orchestrator | 2025-06-02 17:33:20 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:20.857719 | orchestrator | 2025-06-02 17:33:20 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:20.857742 | orchestrator | 2025-06-02 17:33:20 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:20.861701 | orchestrator | 2025-06-02 17:33:20 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:20.861792 | orchestrator | 2025-06-02 17:33:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:23.907080 | orchestrator | 2025-06-02 17:33:23 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:23.911718 | orchestrator | 2025-06-02 17:33:23 | INFO  | Task 9a2bb6eb-689a-4a40-b8a3-b071d1eb3fc0 is in state SUCCESS
2025-06-02 17:33:23.915908 | orchestrator | 2025-06-02 17:33:23 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:23.920909 | orchestrator | 2025-06-02 17:33:23 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:23.924685 | orchestrator | 2025-06-02 17:33:23 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:23.926843 | orchestrator | 2025-06-02 17:33:23 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:23.929090 | orchestrator | 2025-06-02 17:33:23 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:23.929123 | orchestrator | 2025-06-02 17:33:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:26.993928 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:26.995950 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:27.004632 | orchestrator | 2025-06-02 17:33:27 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:27.004748 | orchestrator | 2025-06-02 17:33:27 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:27.006114 | orchestrator | 2025-06-02 17:33:27 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:27.006146 | orchestrator | 2025-06-02 17:33:27 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:27.006158 | orchestrator | 2025-06-02 17:33:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:30.061460 | orchestrator | 2025-06-02 17:33:30 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:30.062485 | orchestrator | 2025-06-02 17:33:30 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:30.063474 | orchestrator | 2025-06-02 17:33:30 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:30.065110 | orchestrator | 2025-06-02 17:33:30 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:30.066995 | orchestrator | 2025-06-02 17:33:30 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:30.068921 | orchestrator | 2025-06-02 17:33:30 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:30.069290 | orchestrator | 2025-06-02 17:33:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:33.123159 | orchestrator | 2025-06-02 17:33:33 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:33.124982 | orchestrator | 2025-06-02 17:33:33 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:33.126698 | orchestrator | 2025-06-02 17:33:33 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:33.130118 | orchestrator | 2025-06-02 17:33:33 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:33.135008 | orchestrator | 2025-06-02 17:33:33 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:33.136451 | orchestrator | 2025-06-02 17:33:33 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:33.137280 | orchestrator | 2025-06-02 17:33:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:36.176066 | orchestrator | 2025-06-02 17:33:36 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:36.176915 | orchestrator | 2025-06-02 17:33:36 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:36.178713 | orchestrator | 2025-06-02 17:33:36 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state STARTED
2025-06-02 17:33:36.182179 | orchestrator | 2025-06-02 17:33:36 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:36.183620 | orchestrator | 2025-06-02 17:33:36 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:36.185161 | orchestrator | 2025-06-02 17:33:36 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:36.185196 | orchestrator | 2025-06-02 17:33:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:39.229484 | orchestrator | 2025-06-02 17:33:39 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:39.230506 | orchestrator | 2025-06-02 17:33:39 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:39.230621 | orchestrator | 2025-06-02 17:33:39 | INFO  | Task 2e3299bf-6f16-46d5-9e55-047774beac2a is in state SUCCESS
2025-06-02 17:33:39.231256 | orchestrator | 2025-06-02 17:33:39 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:39.231881 | orchestrator | 2025-06-02 17:33:39 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:39.232349 | orchestrator | 2025-06-02 17:33:39 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:39.232371 | orchestrator | 2025-06-02 17:33:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:42.278307 | orchestrator | 2025-06-02 17:33:42 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:42.279318 | orchestrator | 2025-06-02 17:33:42 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:42.281913 | orchestrator | 2025-06-02 17:33:42 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:42.282840 | orchestrator | 2025-06-02 17:33:42 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:42.283822 | orchestrator | 2025-06-02 17:33:42 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:42.283848 | orchestrator | 2025-06-02 17:33:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:45.343512 | orchestrator | 2025-06-02 17:33:45 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:45.349154 | orchestrator | 2025-06-02 17:33:45 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:45.352312 | orchestrator | 2025-06-02 17:33:45 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:45.355135 | orchestrator | 2025-06-02 17:33:45 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:45.362645 | orchestrator | 2025-06-02 17:33:45 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:45.362727 | orchestrator | 2025-06-02 17:33:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:48.426346 | orchestrator | 2025-06-02 17:33:48 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:48.433374 | orchestrator | 2025-06-02 17:33:48 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:48.437310 | orchestrator | 2025-06-02 17:33:48 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state STARTED
2025-06-02 17:33:48.437399 | orchestrator | 2025-06-02 17:33:48 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED
2025-06-02 17:33:48.437463 | orchestrator | 2025-06-02 17:33:48 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED
2025-06-02 17:33:48.437486 | orchestrator | 2025-06-02 17:33:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:51.492743 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:33:51.494259 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:33:51.496800 | orchestrator |
2025-06-02 17:33:51.496845 | orchestrator |
2025-06-02 17:33:51.496855 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-06-02 17:33:51.496865 | orchestrator |
2025-06-02 17:33:51.496875 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-06-02 17:33:51.496885 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.773) 0:00:00.773 ***********
2025-06-02 17:33:51.496895 | orchestrator | ok:
[testbed-manager] => {
2025-06-02 17:33:51.496908 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-06-02 17:33:51.496920 | orchestrator | }
2025-06-02 17:33:51.496930 | orchestrator |
2025-06-02 17:33:51.496940 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-06-02 17:33:51.496948 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.257) 0:00:01.031 ***********
2025-06-02 17:33:51.496957 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:51.496966 | orchestrator |
2025-06-02 17:33:51.496974 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-06-02 17:33:51.496983 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:02.205) 0:00:03.237 ***********
2025-06-02 17:33:51.496992 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-06-02 17:33:51.497001 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-06-02 17:33:51.497010 | orchestrator |
2025-06-02 17:33:51.497018 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-06-02 17:33:51.497027 | orchestrator | Monday 02 June 2025 17:32:52 +0000 (0:00:01.084) 0:00:04.322 ***********
2025-06-02 17:33:51.497037 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497046 | orchestrator |
2025-06-02 17:33:51.497055 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-06-02 17:33:51.497064 | orchestrator | Monday 02 June 2025 17:32:54 +0000 (0:00:02.084) 0:00:06.406 ***********
2025-06-02 17:33:51.497072 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497081 | orchestrator |
2025-06-02 17:33:51.497089 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-06-02 17:33:51.497098 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:02.081) 0:00:08.488 ***********
2025-06-02 17:33:51.497107 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-06-02 17:33:51.497117 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:51.497127 | orchestrator |
2025-06-02 17:33:51.497135 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-02 17:33:51.497144 | orchestrator | Monday 02 June 2025 17:33:21 +0000 (0:00:24.182) 0:00:32.670 ***********
2025-06-02 17:33:51.497154 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497163 | orchestrator |
2025-06-02 17:33:51.497173 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:33:51.497184 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:51.497195 | orchestrator |
2025-06-02 17:33:51.497204 | orchestrator |
2025-06-02 17:33:51.497214 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:33:51.497224 | orchestrator | Monday 02 June 2025 17:33:22 +0000 (0:00:01.751) 0:00:34.422 ***********
2025-06-02 17:33:51.497254 | orchestrator | ===============================================================================
2025-06-02 17:33:51.497265 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.18s
2025-06-02 17:33:51.497275 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.21s
2025-06-02 17:33:51.497285 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.08s
2025-06-02 17:33:51.497295 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.08s
2025-06-02 17:33:51.497305 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.75s
2025-06-02 17:33:51.497315 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.08s
2025-06-02 17:33:51.497325 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.26s
2025-06-02 17:33:51.497334 | orchestrator |
2025-06-02 17:33:51.497343 | orchestrator |
2025-06-02 17:33:51.497352 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-02 17:33:51.497361 | orchestrator |
2025-06-02 17:33:51.497371 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-02 17:33:51.497381 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.630) 0:00:00.630 ***********
2025-06-02 17:33:51.497398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-02 17:33:51.497410 | orchestrator |
2025-06-02 17:33:51.497420 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-02 17:33:51.497430 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.677) 0:00:01.308 ***********
2025-06-02 17:33:51.497440 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-02 17:33:51.497451 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-02 17:33:51.497461 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-02 17:33:51.497471 | orchestrator |
2025-06-02 17:33:51.497479 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-02 17:33:51.497488 | orchestrator | Monday 02 June 2025 17:32:52 +0000 (0:00:02.604) 0:00:03.912 ***********
2025-06-02 17:33:51.497497 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497506 | orchestrator |
2025-06-02 17:33:51.497514 |
orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-02 17:33:51.497570 | orchestrator | Monday 02 June 2025 17:32:53 +0000 (0:00:01.495) 0:00:05.407 ***********
2025-06-02 17:33:51.497596 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-02 17:33:51.497605 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:51.497614 | orchestrator |
2025-06-02 17:33:51.497623 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-02 17:33:51.497632 | orchestrator | Monday 02 June 2025 17:33:30 +0000 (0:00:36.906) 0:00:42.314 ***********
2025-06-02 17:33:51.497641 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497649 | orchestrator |
2025-06-02 17:33:51.497659 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-02 17:33:51.497668 | orchestrator | Monday 02 June 2025 17:33:32 +0000 (0:00:01.756) 0:00:44.070 ***********
2025-06-02 17:33:51.497677 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:51.497686 | orchestrator |
2025-06-02 17:33:51.497695 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-02 17:33:51.497701 | orchestrator | Monday 02 June 2025 17:33:34 +0000 (0:00:01.768) 0:00:45.839 ***********
2025-06-02 17:33:51.497706 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497711 | orchestrator |
2025-06-02 17:33:51.497716 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-02 17:33:51.497722 | orchestrator | Monday 02 June 2025 17:33:35 +0000 (0:00:00.730) 0:00:47.734 ***********
2025-06-02 17:33:51.497727 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497741 | orchestrator |
2025-06-02 17:33:51.497747 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-02 17:33:51.497752 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:00.730) 0:00:48.465 ***********
2025-06-02 17:33:51.497757 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.497763 | orchestrator |
2025-06-02 17:33:51.497768 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-02 17:33:51.497773 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:00.572) 0:00:49.037 ***********
2025-06-02 17:33:51.497779 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:51.497784 | orchestrator |
2025-06-02 17:33:51.497790 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:33:51.497795 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:51.497800 | orchestrator |
2025-06-02 17:33:51.497806 | orchestrator |
2025-06-02 17:33:51.497811 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:33:51.497816 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:00.435) 0:00:49.473 ***********
2025-06-02 17:33:51.497822 | orchestrator | ===============================================================================
2025-06-02 17:33:51.497827 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.91s
2025-06-02 17:33:51.497832 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.60s
2025-06-02 17:33:51.497838 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.89s
2025-06-02 17:33:51.497843 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.77s
2025-06-02 17:33:51.497848 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.76s
2025-06-02 17:33:51.497854 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.50s
2025-06-02 17:33:51.497859 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.73s
2025-06-02 17:33:51.497864 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.68s
2025-06-02 17:33:51.497870 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.57s
2025-06-02 17:33:51.497875 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2025-06-02 17:33:51.497880 | orchestrator |
2025-06-02 17:33:51.497886 | orchestrator |
2025-06-02 17:33:51.497891 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:33:51.497896 | orchestrator |
2025-06-02 17:33:51.497902 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:33:51.497907 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.626) 0:00:00.626 ***********
2025-06-02 17:33:51.497912 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-06-02 17:33:51.497918 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-06-02 17:33:51.497923 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-06-02 17:33:51.497928 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-06-02 17:33:51.497933 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-06-02 17:33:51.497939 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-06-02 17:33:51.497944 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-06-02 17:33:51.497948 | orchestrator |
2025-06-02 17:33:51.497953 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-06-02 17:33:51.497958 | orchestrator |
2025-06-02 17:33:51.497963 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-06-02 17:33:51.497968 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:01.883) 0:00:02.509 ***********
2025-06-02 17:33:51.497982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:33:51.497996 | orchestrator |
2025-06-02 17:33:51.498001 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-02 17:33:51.498006 | orchestrator | Monday 02 June 2025 17:32:53 +0000 (0:00:01.973) 0:00:04.483 ***********
2025-06-02 17:33:51.498011 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:51.498072 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:33:51.498078 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:33:51.498082 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:33:51.498087 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:33:51.498098 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:33:51.498103 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:33:51.498108 | orchestrator |
2025-06-02 17:33:51.498113 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-02 17:33:51.498118 | orchestrator | Monday 02 June 2025 17:32:55 +0000 (0:00:01.825) 0:00:06.308 ***********
2025-06-02 17:33:51.498123 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:51.498127 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:33:51.498132 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:33:51.498137 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:33:51.498141 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:33:51.498146 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:33:51.498151 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:33:51.498155 | orchestrator |
2025-06-02 17:33:51.498160 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-02 17:33:51.498165 | orchestrator | Monday 02 June 2025 17:32:59 +0000 (0:00:03.538) 0:00:09.847 ***********
2025-06-02 17:33:51.498170 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:33:51.498175 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:33:51.498180 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.498184 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:33:51.498189 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:33:51.498194 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:33:51.498198 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:33:51.498203 | orchestrator |
2025-06-02 17:33:51.498208 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-02 17:33:51.498213 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:02.978) 0:00:12.825 ***********
2025-06-02 17:33:51.498218 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:51.498222 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:33:51.498227 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:33:51.498232 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:33:51.498236 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:33:51.498241 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:33:51.498246 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:33:51.498250 | orchestrator |
2025-06-02 17:33:51.498255 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-02 17:33:51.498260 | orchestrator | Monday 02 June 2025 17:33:11 +0000 (0:00:09.749) 0:00:22.575 ***********
2025-06-02 17:33:51.498265 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:33:51.498269 | orchestrator | changed: [testbed-node-2]
2025-06-02
17:33:51.498274 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:33:51.498279 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:33:51.498284 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:33:51.498289 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:33:51.498293 | orchestrator | changed: [testbed-manager] 2025-06-02 17:33:51.498298 | orchestrator | 2025-06-02 17:33:51.498303 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-02 17:33:51.498308 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:16.246) 0:00:38.821 *********** 2025-06-02 17:33:51.498314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:33:51.498325 | orchestrator | 2025-06-02 17:33:51.498330 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-02 17:33:51.498335 | orchestrator | Monday 02 June 2025 17:33:29 +0000 (0:00:01.860) 0:00:40.682 *********** 2025-06-02 17:33:51.498339 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-02 17:33:51.498344 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-02 17:33:51.498349 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-02 17:33:51.498354 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-02 17:33:51.498359 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-02 17:33:51.498363 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-02 17:33:51.498368 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-02 17:33:51.498373 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-02 17:33:51.498378 | orchestrator | changed: [testbed-node-2] => 
(item=stream.conf) 2025-06-02 17:33:51.498382 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-02 17:33:51.498387 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-02 17:33:51.498392 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-02 17:33:51.498423 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-02 17:33:51.498428 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-02 17:33:51.498433 | orchestrator | 2025-06-02 17:33:51.498438 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-02 17:33:51.498446 | orchestrator | Monday 02 June 2025 17:33:35 +0000 (0:00:05.695) 0:00:46.377 *********** 2025-06-02 17:33:51.498451 | orchestrator | ok: [testbed-manager] 2025-06-02 17:33:51.498456 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:33:51.498461 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:33:51.498465 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:33:51.498470 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:33:51.498475 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:33:51.498479 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:33:51.498484 | orchestrator | 2025-06-02 17:33:51.498489 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-02 17:33:51.498494 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:01.325) 0:00:47.703 *********** 2025-06-02 17:33:51.498498 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:33:51.498503 | orchestrator | changed: [testbed-manager] 2025-06-02 17:33:51.498508 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:33:51.498512 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:33:51.498517 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:33:51.498522 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:33:51.498542 | orchestrator | 
changed: [testbed-node-5] 2025-06-02 17:33:51.498547 | orchestrator | 2025-06-02 17:33:51.498552 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-02 17:33:51.498562 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:01.654) 0:00:49.358 *********** 2025-06-02 17:33:51.498567 | orchestrator | ok: [testbed-manager] 2025-06-02 17:33:51.498572 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:33:51.498576 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:33:51.498581 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:33:51.498586 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:33:51.498591 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:33:51.498595 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:33:51.498600 | orchestrator | 2025-06-02 17:33:51.498605 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-02 17:33:51.498610 | orchestrator | Monday 02 June 2025 17:33:40 +0000 (0:00:01.616) 0:00:50.974 *********** 2025-06-02 17:33:51.498614 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:33:51.498619 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:33:51.498624 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:33:51.498628 | orchestrator | ok: [testbed-manager] 2025-06-02 17:33:51.498638 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:33:51.498642 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:33:51.498647 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:33:51.498652 | orchestrator | 2025-06-02 17:33:51.498657 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-02 17:33:51.498661 | orchestrator | Monday 02 June 2025 17:33:42 +0000 (0:00:02.024) 0:00:52.998 *********** 2025-06-02 17:33:51.498666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-02 17:33:51.498673 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:33:51.498678 | orchestrator | 2025-06-02 17:33:51.498683 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-02 17:33:51.498687 | orchestrator | Monday 02 June 2025 17:33:43 +0000 (0:00:01.353) 0:00:54.351 *********** 2025-06-02 17:33:51.498692 | orchestrator | changed: [testbed-manager] 2025-06-02 17:33:51.498697 | orchestrator | 2025-06-02 17:33:51.498702 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-02 17:33:51.498706 | orchestrator | Monday 02 June 2025 17:33:46 +0000 (0:00:02.626) 0:00:56.978 *********** 2025-06-02 17:33:51.498711 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:33:51.498716 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:33:51.498720 | orchestrator | changed: [testbed-manager] 2025-06-02 17:33:51.498725 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:33:51.498730 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:33:51.498734 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:33:51.498739 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:33:51.498744 | orchestrator | 2025-06-02 17:33:51.498749 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:33:51.498753 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:51.498758 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:51.498763 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:51.498768 | orchestrator | testbed-node-2 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:51.498773 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:51.498778 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:51.498782 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:33:51.498787 | orchestrator | 2025-06-02 17:33:51.498792 | orchestrator | 2025-06-02 17:33:51.498797 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:33:51.498801 | orchestrator | Monday 02 June 2025 17:33:49 +0000 (0:00:03.497) 0:01:00.475 *********** 2025-06-02 17:33:51.498806 | orchestrator | =============================================================================== 2025-06-02 17:33:51.498811 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.25s 2025-06-02 17:33:51.498818 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.75s 2025-06-02 17:33:51.498823 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.70s 2025-06-02 17:33:51.498828 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.54s 2025-06-02 17:33:51.498837 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.50s 2025-06-02 17:33:51.498841 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.98s 2025-06-02 17:33:51.498846 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.63s 2025-06-02 17:33:51.498851 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.02s 2025-06-02 17:33:51.498856 | orchestrator | osism.services.netdata : Include distribution specific 
install tasks ---- 1.97s 2025-06-02 17:33:51.498860 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.88s 2025-06-02 17:33:51.498865 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.86s 2025-06-02 17:33:51.498873 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.83s 2025-06-02 17:33:51.498878 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.65s 2025-06-02 17:33:51.498883 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.62s 2025-06-02 17:33:51.498887 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.35s 2025-06-02 17:33:51.498892 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.33s 2025-06-02 17:33:51.498897 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task 28318279-8f15-4e2b-827a-7497c74b237f is in state SUCCESS 2025-06-02 17:33:51.498902 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED 2025-06-02 17:33:51.498909 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED 2025-06-02 17:33:51.500727 | orchestrator | 2025-06-02 17:33:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:33:54.555156 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:33:54.556607 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:33:54.558213 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED 2025-06-02 17:33:54.561363 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state STARTED 2025-06-02 
17:33:54.561522 | orchestrator | 2025-06-02 17:33:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:40.485222 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:34:40.488403 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:34:40.492817 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state STARTED 2025-06-02 17:34:40.493821 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task 0a4e976d-ec0f-4334-a0ff-31e4f891333f is in state SUCCESS 2025-06-02 17:34:40.493850 | orchestrator | 2025-06-02 17:34:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:04.903082 | orchestrator | 2025-06-02 17:35:04 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:35:04.906597 | orchestrator | 2025-06-02 17:35:04 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:35:04.909528 | orchestrator | 2025-06-02 17:35:04 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state
STARTED
2025-06-02 17:35:04.909636 | orchestrator | 2025-06-02 17:35:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:35:07.973396 | orchestrator | 2025-06-02 17:35:07 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:35:07.973681 | orchestrator | 2025-06-02 17:35:07 | INFO  | Task b8bebe49-3513-4ab3-afd4-5fb6fe924ad0 is in state STARTED
2025-06-02 17:35:07.974870 | orchestrator | 2025-06-02 17:35:07 | INFO  | Task b32a6c0d-cf55-4600-98d4-84f3330ed81a is in state STARTED
2025-06-02 17:35:07.977016 | orchestrator | 2025-06-02 17:35:07 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:35:07.979721 | orchestrator | 2025-06-02 17:35:07 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:35:07.991476 | orchestrator | 2025-06-02 17:35:07 | INFO  | Task 175493ae-31bb-4f31-a797-1eada3a66217 is in state SUCCESS
2025-06-02 17:35:07.998987 | orchestrator |
2025-06-02 17:35:07.999057 | orchestrator |
2025-06-02 17:35:07.999072 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-02 17:35:07.999084 | orchestrator |
2025-06-02 17:35:07.999096 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-02 17:35:07.999112 | orchestrator | Monday 02 June 2025 17:33:12 +0000 (0:00:00.528) 0:00:00.528 ***********
2025-06-02 17:35:07.999131 | orchestrator | ok: [testbed-manager]
2025-06-02 17:35:07.999151 | orchestrator |
2025-06-02 17:35:07.999171 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-02 17:35:07.999189 | orchestrator | Monday 02 June 2025 17:33:13 +0000 (0:00:01.518) 0:00:02.047 ***********
2025-06-02 17:35:07.999208 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-02 17:35:07.999227 | orchestrator |
2025-06-02 17:35:07.999246 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-02 17:35:07.999264 | orchestrator | Monday 02 June 2025 17:33:14 +0000 (0:00:00.805) 0:00:02.853 ***********
2025-06-02 17:35:07.999281 | orchestrator | changed: [testbed-manager]
2025-06-02 17:35:07.999297 | orchestrator |
2025-06-02 17:35:07.999315 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-02 17:35:07.999340 | orchestrator | Monday 02 June 2025 17:33:15 +0000 (0:00:01.300) 0:00:04.153 ***********
2025-06-02 17:35:07.999361 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-02 17:35:07.999379 | orchestrator | ok: [testbed-manager]
2025-06-02 17:35:07.999426 | orchestrator |
2025-06-02 17:35:07.999444 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-02 17:35:07.999473 | orchestrator | Monday 02 June 2025 17:34:25 +0000 (0:01:09.165) 0:01:13.319 ***********
2025-06-02 17:35:07.999491 | orchestrator | changed: [testbed-manager]
2025-06-02 17:35:07.999510 | orchestrator |
2025-06-02 17:35:07.999528 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:35:07.999546 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:35:07.999566 | orchestrator |
2025-06-02 17:35:07.999618 | orchestrator |
2025-06-02 17:35:07.999636 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:35:07.999656 | orchestrator | Monday 02 June 2025 17:34:39 +0000 (0:00:13.890) 0:01:27.210 ***********
2025-06-02 17:35:07.999674 | orchestrator | ===============================================================================
2025-06-02 17:35:07.999692 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.17s
2025-06-02 17:35:07.999703 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 13.89s
2025-06-02 17:35:07.999714 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.52s
2025-06-02 17:35:07.999724 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.30s
2025-06-02 17:35:07.999735 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.81s
2025-06-02 17:35:07.999746 | orchestrator |
2025-06-02 17:35:07.999757 | orchestrator |
2025-06-02 17:35:07.999767 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-02 17:35:07.999778 | orchestrator |
2025-06-02 17:35:07.999788 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 17:35:07.999799 | orchestrator | Monday 02 June 2025 17:32:40 +0000 (0:00:00.293) 0:00:00.293 ***********
2025-06-02 17:35:07.999810 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:35:07.999827 | orchestrator |
2025-06-02 17:35:07.999846 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-02 17:35:07.999863 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:01.327) 0:00:01.621 ***********
2025-06-02 17:35:07.999880 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:35:07.999898 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:35:07.999913 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:35:07.999932 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:35:07.999949 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:35:07.999968 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:35:07.999986 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:35:08.000005 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:35:08.000020 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:35:08.000032 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:35:08.000043 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:35:08.000053 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:35:08.000064 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:35:08.000075 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:35:08.000098 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:35:08.000109 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:35:08.000137 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:35:08.000149 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:35:08.000160 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:35:08.000171 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:35:08.000182 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:35:08.000193 | orchestrator |
2025-06-02 17:35:08.000204 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 17:35:08.000215 | orchestrator | Monday 02 June 2025 17:32:46 +0000 (0:00:04.338) 0:00:05.960 ***********
2025-06-02 17:35:08.000226 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:35:08.000238 | orchestrator |
2025-06-02 17:35:08.000249 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-02 17:35:08.000260 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:01.675) 0:00:07.635 ***********
2025-06-02 17:35:08.000276 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000337 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000456 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000749 | orchestrator |
2025-06-02 17:35:08.000768 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-02 17:35:08.000797 | orchestrator | Monday 02 June 2025 17:32:53 +0000 (0:00:05.001) 0:00:12.637 ***********
2025-06-02 17:35:08.000811 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000829 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000841 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000852 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:35:08.000864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000906 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:35:08.000917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.000944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.000967 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:35:08.000992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.001012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001061 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:35:08.001079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.001097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001146 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:35:08.001166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.001193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001230 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:35:08.001242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.001262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001284 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:35:08.001295 | orchestrator |
2025-06-02 17:35:08.001307 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-02 17:35:08.001318 | orchestrator | Monday 02 June 2025 17:32:54 +0000 (0:00:01.202) 0:00:13.840 ***********
2025-06-02 17:35:08.001329 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.001349 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001360 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001372 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:35:08.001387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.001399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.001431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.001442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.002174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.002219 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:35:08.002230 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:35:08.002241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.002253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.002264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:35:08.002285 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:35:08.002295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:35:08.002305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.002321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.002331 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:35:08.002341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 17:35:08.002363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.002374 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.002384 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:35:08.002398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 17:35:08.002414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.002425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.002435 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:35:08.002445 | orchestrator | 2025-06-02 17:35:08.002455 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-02 17:35:08.002465 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:02.667) 0:00:16.507 *********** 2025-06-02 17:35:08.002475 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:35:08.002484 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:35:08.002494 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:35:08.002503 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:35:08.002513 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:35:08.002522 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:35:08.002531 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:35:08.002541 | orchestrator | 2025-06-02 17:35:08.002550 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-02 17:35:08.002560 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:00.799) 0:00:17.307 *********** 2025-06-02 17:35:08.002589 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:35:08.002601 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:35:08.002613 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:35:08.002624 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:35:08.002635 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:35:08.002646 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:35:08.002658 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:35:08.002670 | orchestrator | 2025-06-02 17:35:08.002681 | orchestrator | TASK [common : Copying over config.json files for services] 
******************** 2025-06-02 17:35:08.002693 | orchestrator | Monday 02 June 2025 17:32:59 +0000 (0:00:01.126) 0:00:18.434 *********** 2025-06-02 17:35:08.002711 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.002724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.002748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.002761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.002773 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.002839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.002865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002877 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.002897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.002990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003010 | orchestrator | 2025-06-02 17:35:08.003019 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-02 17:35:08.003029 | orchestrator | Monday 02 June 2025 17:33:04 +0000 (0:00:05.857) 0:00:24.291 *********** 2025-06-02 17:35:08.003039 | orchestrator | [WARNING]: Skipped 2025-06-02 17:35:08.003050 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-02 17:35:08.003060 | orchestrator | to this access issue: 2025-06-02 17:35:08.003069 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-02 17:35:08.003079 | orchestrator | directory 2025-06-02 17:35:08.003088 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:35:08.003098 | orchestrator | 2025-06-02 17:35:08.003108 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-02 17:35:08.003117 | orchestrator | Monday 02 June 2025 17:33:07 +0000 (0:00:02.115) 0:00:26.407 *********** 2025-06-02 17:35:08.003126 | orchestrator | [WARNING]: Skipped 2025-06-02 17:35:08.003136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-02 17:35:08.003145 | orchestrator | to this access issue: 2025-06-02 17:35:08.003154 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-02 17:35:08.003170 | orchestrator | directory 2025-06-02 17:35:08.003180 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:35:08.003189 | orchestrator | 2025-06-02 17:35:08.003198 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-02 17:35:08.003208 | orchestrator | Monday 02 June 2025 
17:33:08 +0000 (0:00:01.242) 0:00:27.650 *********** 2025-06-02 17:35:08.003217 | orchestrator | [WARNING]: Skipped 2025-06-02 17:35:08.003227 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-02 17:35:08.003236 | orchestrator | to this access issue: 2025-06-02 17:35:08.003246 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-02 17:35:08.003255 | orchestrator | directory 2025-06-02 17:35:08.003265 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:35:08.003274 | orchestrator | 2025-06-02 17:35:08.003288 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-02 17:35:08.003298 | orchestrator | Monday 02 June 2025 17:33:09 +0000 (0:00:00.918) 0:00:28.568 *********** 2025-06-02 17:35:08.003308 | orchestrator | [WARNING]: Skipped 2025-06-02 17:35:08.003317 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-02 17:35:08.003327 | orchestrator | to this access issue: 2025-06-02 17:35:08.003336 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-02 17:35:08.003346 | orchestrator | directory 2025-06-02 17:35:08.003355 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:35:08.003365 | orchestrator | 2025-06-02 17:35:08.003374 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-02 17:35:08.003384 | orchestrator | Monday 02 June 2025 17:33:10 +0000 (0:00:00.983) 0:00:29.552 *********** 2025-06-02 17:35:08.003393 | orchestrator | changed: [testbed-manager] 2025-06-02 17:35:08.003403 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:08.003412 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:08.003422 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:08.003431 | orchestrator | changed: [testbed-node-3] 
2025-06-02 17:35:08.003440 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:35:08.003450 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:35:08.003459 | orchestrator | 2025-06-02 17:35:08.003468 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-02 17:35:08.003478 | orchestrator | Monday 02 June 2025 17:33:15 +0000 (0:00:05.506) 0:00:35.058 *********** 2025-06-02 17:35:08.003487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:35:08.003502 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:35:08.003512 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:35:08.003521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:35:08.003531 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:35:08.003540 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:35:08.003550 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:35:08.003559 | orchestrator | 2025-06-02 17:35:08.003587 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-02 17:35:08.003597 | orchestrator | Monday 02 June 2025 17:33:19 +0000 (0:00:03.503) 0:00:38.561 *********** 2025-06-02 17:35:08.003607 | orchestrator | changed: [testbed-manager] 2025-06-02 17:35:08.003617 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:08.003626 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:08.003636 | orchestrator | changed: [testbed-node-2] 2025-06-02 
17:35:08.003645 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:35:08.003658 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:35:08.003668 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:35:08.003678 | orchestrator | 2025-06-02 17:35:08.003688 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-02 17:35:08.003697 | orchestrator | Monday 02 June 2025 17:33:21 +0000 (0:00:02.730) 0:00:41.292 *********** 2025-06-02 17:35:08.003707 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.003718 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.003729 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003747 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.003757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.003771 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.003782 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.003797 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.003807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.003817 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003832 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003842 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003853 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.003867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.003882 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.003892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.003903 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003913 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.003932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:35:08.003943 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003953 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.003963 | orchestrator | 2025-06-02 17:35:08.003973 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-02 17:35:08.003988 | orchestrator | Monday 02 June 2025 17:33:25 +0000 (0:00:03.151) 0:00:44.444 *********** 2025-06-02 17:35:08.003997 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:35:08.004007 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:35:08.004017 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:35:08.004026 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:35:08.004036 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:35:08.004046 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:35:08.004055 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:35:08.004065 | orchestrator | 2025-06-02 17:35:08.004074 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-02 17:35:08.004084 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:03.391) 0:00:47.836 *********** 2025-06-02 17:35:08.004094 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:35:08.004103 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:35:08.004113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:35:08.004122 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:35:08.004132 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:35:08.004142 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:35:08.004151 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:35:08.004161 | orchestrator | 2025-06-02 17:35:08.004170 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-02 17:35:08.004180 | orchestrator | Monday 02 June 2025 17:33:31 +0000 (0:00:02.624) 0:00:50.460 *********** 2025-06-02 17:35:08.004190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.004207 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.004233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.004250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.004282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004299 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.004315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004342 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.004372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:35:08.004396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004417 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:35:08.004504 | orchestrator | 2025-06-02 17:35:08.004514 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-02 17:35:08.004523 | orchestrator | Monday 02 June 2025 17:33:34 +0000 (0:00:03.781) 0:00:54.242 *********** 2025-06-02 17:35:08.004533 | orchestrator | changed: [testbed-manager] 2025-06-02 17:35:08.004543 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:08.004552 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:08.004562 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:08.004589 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:35:08.004599 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:35:08.004608 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:35:08.004618 | orchestrator | 2025-06-02 17:35:08.004629 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-02 17:35:08.004639 | orchestrator | Monday 02 June 2025 
17:33:36 +0000 (0:00:01.576) 0:00:55.819 *********** 2025-06-02 17:35:08.004650 | orchestrator | changed: [testbed-manager] 2025-06-02 17:35:08.004661 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:08.004672 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:08.004682 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:08.004693 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:35:08.004704 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:35:08.004715 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:35:08.004726 | orchestrator | 2025-06-02 17:35:08.004736 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:35:08.004747 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:01.181) 0:00:57.000 *********** 2025-06-02 17:35:08.004758 | orchestrator | 2025-06-02 17:35:08.004769 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:35:08.004779 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:00.238) 0:00:57.238 *********** 2025-06-02 17:35:08.004790 | orchestrator | 2025-06-02 17:35:08.004801 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:35:08.004812 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:00.065) 0:00:57.303 *********** 2025-06-02 17:35:08.004822 | orchestrator | 2025-06-02 17:35:08.004840 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:35:08.004850 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:00.069) 0:00:57.373 *********** 2025-06-02 17:35:08.004861 | orchestrator | 2025-06-02 17:35:08.004872 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:35:08.004883 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:00.063) 0:00:57.436 *********** 2025-06-02 
17:35:08.004893 | orchestrator | 2025-06-02 17:35:08.004904 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:35:08.004915 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:00.063) 0:00:57.499 *********** 2025-06-02 17:35:08.004926 | orchestrator | 2025-06-02 17:35:08.004936 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:35:08.004947 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:00.061) 0:00:57.561 *********** 2025-06-02 17:35:08.004958 | orchestrator | 2025-06-02 17:35:08.004969 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-02 17:35:08.004979 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:00.082) 0:00:57.643 *********** 2025-06-02 17:35:08.004996 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:08.005007 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:08.005018 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:35:08.005029 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:35:08.005040 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:08.005050 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:35:08.005061 | orchestrator | changed: [testbed-manager] 2025-06-02 17:35:08.005072 | orchestrator | 2025-06-02 17:35:08.005082 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-02 17:35:08.005093 | orchestrator | Monday 02 June 2025 17:34:19 +0000 (0:00:41.037) 0:01:38.680 *********** 2025-06-02 17:35:08.005104 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:08.005115 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:08.005125 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:35:08.005136 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:35:08.005146 | orchestrator | changed: [testbed-node-1] 2025-06-02 
17:35:08.005157 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:35:08.005168 | orchestrator | changed: [testbed-manager] 2025-06-02 17:35:08.005178 | orchestrator | 2025-06-02 17:35:08.005189 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-02 17:35:08.005200 | orchestrator | Monday 02 June 2025 17:34:58 +0000 (0:00:38.915) 0:02:17.595 *********** 2025-06-02 17:35:08.005211 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:35:08.005222 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:35:08.005233 | orchestrator | ok: [testbed-manager] 2025-06-02 17:35:08.005244 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:35:08.005254 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:35:08.005265 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:35:08.005276 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:35:08.005287 | orchestrator | 2025-06-02 17:35:08.005297 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-02 17:35:08.005313 | orchestrator | Monday 02 June 2025 17:35:00 +0000 (0:00:02.274) 0:02:19.870 *********** 2025-06-02 17:35:08.005324 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:08.005335 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:08.005346 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:08.005357 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:35:08.005367 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:35:08.005378 | orchestrator | changed: [testbed-manager] 2025-06-02 17:35:08.005389 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:35:08.005399 | orchestrator | 2025-06-02 17:35:08.005410 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:35:08.005422 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:35:08.005440 | orchestrator | 
testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:35:08.005451 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:35:08.005462 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:35:08.005473 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:35:08.005484 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:35:08.005495 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:35:08.005505 | orchestrator | 2025-06-02 17:35:08.005517 | orchestrator | 2025-06-02 17:35:08.005527 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:35:08.005538 | orchestrator | Monday 02 June 2025 17:35:05 +0000 (0:00:05.090) 0:02:24.960 *********** 2025-06-02 17:35:08.005549 | orchestrator | =============================================================================== 2025-06-02 17:35:08.005560 | orchestrator | common : Restart fluentd container ------------------------------------- 41.04s 2025-06-02 17:35:08.005598 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 38.92s 2025-06-02 17:35:08.005610 | orchestrator | common : Copying over config.json files for services -------------------- 5.86s 2025-06-02 17:35:08.005621 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.51s 2025-06-02 17:35:08.005631 | orchestrator | common : Restart cron container ----------------------------------------- 5.09s 2025-06-02 17:35:08.005642 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.00s 2025-06-02 17:35:08.005653 | orchestrator 
| common : Ensuring config directories exist ------------------------------ 4.34s
2025-06-02 17:35:08.005663 | orchestrator | common : Check common containers ---------------------------------------- 3.78s
2025-06-02 17:35:08.005674 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.50s
2025-06-02 17:35:08.005685 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.39s
2025-06-02 17:35:08.005696 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.15s
2025-06-02 17:35:08.005706 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.73s
2025-06-02 17:35:08.005717 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.67s
2025-06-02 17:35:08.005727 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.63s
2025-06-02 17:35:08.005744 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.27s
2025-06-02 17:35:08.005755 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.12s
2025-06-02 17:35:08.005766 | orchestrator | common : include_tasks -------------------------------------------------- 1.68s
2025-06-02 17:35:08.005776 | orchestrator | common : Creating log volume -------------------------------------------- 1.58s
2025-06-02 17:35:08.005787 | orchestrator | common : include_tasks -------------------------------------------------- 1.33s
2025-06-02 17:35:08.005798 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.24s
2025-06-02 17:35:08.008152 | orchestrator | 2025-06-02 17:35:08 | INFO  | Task 0319920a-b556-4a67-bc17-7665313f7dc2 is in state STARTED
2025-06-02 17:35:08.008209 | orchestrator | 2025-06-02 17:35:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:35:11.073815 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:35:11.073958 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task b8bebe49-3513-4ab3-afd4-5fb6fe924ad0 is in state STARTED
2025-06-02 17:35:11.074314 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task b32a6c0d-cf55-4600-98d4-84f3330ed81a is in state STARTED
2025-06-02 17:35:11.075536 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:35:11.078074 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:35:11.079634 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task 0319920a-b556-4a67-bc17-7665313f7dc2 is in state STARTED
2025-06-02 17:35:11.079705 | orchestrator | 2025-06-02 17:35:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:35:35.605947 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:35:35.606189 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task b8bebe49-3513-4ab3-afd4-5fb6fe924ad0 is in state SUCCESS
2025-06-02 17:35:35.613720 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task b32a6c0d-cf55-4600-98d4-84f3330ed81a is in state STARTED
2025-06-02 17:35:35.613989 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:35:35.614681 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:35:35.616948 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task 1dffcb9c-6b04-44b4-ab86-5cdc9108be47 is in state STARTED
2025-06-02 17:35:35.618664 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task 0319920a-b556-4a67-bc17-7665313f7dc2 is in state STARTED
2025-06-02 17:35:35.618722 | orchestrator | 2025-06-02 17:35:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:35:44.777520 | orchestrator | 2025-06-02 17:35:44 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:35:44.777798 | orchestrator | 2025-06-02 17:35:44 | INFO  | Task b32a6c0d-cf55-4600-98d4-84f3330ed81a is in state SUCCESS
2025-06-02 17:35:44.778835 | orchestrator |
2025-06-02 17:35:44.778888 | orchestrator |
2025-06-02 17:35:44.778896 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:35:44.778902 | orchestrator |
2025-06-02 17:35:44.778906 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:35:44.778910 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.320) 0:00:00.320 ***********
2025-06-02 17:35:44.778915 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:35:44.778920 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:35:44.778924 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:35:44.778928 | orchestrator |
2025-06-02 17:35:44.778932 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:35:44.778955 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.300) 0:00:00.621 ***********
2025-06-02 17:35:44.778960 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-02 17:35:44.778964 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-02 17:35:44.778970 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-02 17:35:44.778976 | orchestrator |
2025-06-02 17:35:44.778982 | orchestrator | PLAY [Apply role memcached]
****************************************************
2025-06-02 17:35:44.778987 | orchestrator |
2025-06-02 17:35:44.778993 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-02 17:35:44.778999 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.385) 0:00:01.006 ***********
2025-06-02 17:35:44.779005 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:35:44.779013 | orchestrator |
2025-06-02 17:35:44.779019 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-02 17:35:44.779025 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:01.024) 0:00:02.031 ***********
2025-06-02 17:35:44.779031 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 17:35:44.779038 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 17:35:44.779044 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 17:35:44.779050 | orchestrator |
2025-06-02 17:35:44.779056 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-02 17:35:44.779063 | orchestrator | Monday 02 June 2025 17:35:17 +0000 (0:00:01.137) 0:00:03.173 ***********
2025-06-02 17:35:44.779069 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 17:35:44.779076 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 17:35:44.779082 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 17:35:44.779088 | orchestrator |
2025-06-02 17:35:44.779094 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-02 17:35:44.779100 | orchestrator | Monday 02 June 2025 17:35:20 +0000 (0:00:03.083) 0:00:06.257 ***********
2025-06-02 17:35:44.779107 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:35:44.779113 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:35:44.779120 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:35:44.779126 | orchestrator |
2025-06-02 17:35:44.779132 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-02 17:35:44.779138 | orchestrator | Monday 02 June 2025 17:35:22 +0000 (0:00:02.696) 0:00:08.954 ***********
2025-06-02 17:35:44.779145 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:35:44.779151 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:35:44.779157 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:35:44.779163 | orchestrator |
2025-06-02 17:35:44.779168 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:35:44.779176 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:35:44.779182 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:35:44.779186 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:35:44.779190 | orchestrator |
2025-06-02 17:35:44.779194 | orchestrator |
2025-06-02 17:35:44.779198 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:35:44.779202 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:09.402) 0:00:18.356 ***********
2025-06-02 17:35:44.779206 | orchestrator | ===============================================================================
2025-06-02 17:35:44.779210 | orchestrator | memcached : Restart memcached container --------------------------------- 9.40s
2025-06-02 17:35:44.779214 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.08s
2025-06-02 17:35:44.779259 | orchestrator | memcached : Check memcached container ----------------------------------- 2.70s
2025-06-02 17:35:44.779264 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.14s
2025-06-02 17:35:44.779268 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.02s
2025-06-02 17:35:44.779282 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2025-06-02 17:35:44.779286 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-06-02 17:35:44.779289 | orchestrator |
2025-06-02 17:35:44.779293 | orchestrator |
2025-06-02 17:35:44.779297 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:35:44.779301 | orchestrator |
2025-06-02 17:35:44.779305 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:35:44.779308 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.241) 0:00:00.241 ***********
2025-06-02 17:35:44.779312 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:35:44.779316 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:35:44.779320 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:35:44.779324 | orchestrator |
2025-06-02 17:35:44.779328 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:35:44.779344 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.303) 0:00:00.545 ***********
2025-06-02 17:35:44.779348 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-02 17:35:44.779352 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-02 17:35:44.779355 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-02 17:35:44.779359 | orchestrator |
2025-06-02 17:35:44.779363 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-02 17:35:44.779366 | orchestrator |
2025-06-02 17:35:44.779370 | orchestrator
| TASK [redis : include_tasks] *************************************************** 2025-06-02 17:35:44.779374 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:00.381) 0:00:00.926 *********** 2025-06-02 17:35:44.779378 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:35:44.779383 | orchestrator | 2025-06-02 17:35:44.779388 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-02 17:35:44.779395 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:00.645) 0:00:01.572 *********** 2025-06-02 17:35:44.779403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779479 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779487 | orchestrator | 2025-06-02 17:35:44.779494 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-02 17:35:44.779501 | orchestrator | Monday 02 June 2025 17:35:17 +0000 (0:00:01.927) 0:00:03.500 *********** 2025-06-02 17:35:44.779521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779641 | orchestrator | 2025-06-02 17:35:44.779648 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-02 17:35:44.779655 | orchestrator | Monday 02 June 2025 17:35:21 +0000 (0:00:03.674) 0:00:07.174 *********** 2025-06-02 17:35:44.779662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779720 | orchestrator | 2025-06-02 17:35:44.779726 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-02 17:35:44.779733 | orchestrator | Monday 02 June 2025 17:35:25 +0000 (0:00:04.084) 0:00:11.259 *********** 2025-06-02 17:35:44.779739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 17:35:44.779781 | orchestrator | 2025-06-02 17:35:44.779785 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-02 17:35:44.779790 | orchestrator | Monday 02 June 2025 17:35:27 +0000 (0:00:02.450) 0:00:13.710 *********** 2025-06-02 17:35:44.779794 | orchestrator | 2025-06-02 17:35:44.779799 | 
orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-02 17:35:44.779803 | orchestrator | Monday 02 June 2025 17:35:28 +0000 (0:00:00.161) 0:00:13.872 *********** 2025-06-02 17:35:44.779808 | orchestrator | 2025-06-02 17:35:44.779812 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-02 17:35:44.779817 | orchestrator | Monday 02 June 2025 17:35:28 +0000 (0:00:00.154) 0:00:14.026 *********** 2025-06-02 17:35:44.779821 | orchestrator | 2025-06-02 17:35:44.779826 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-02 17:35:44.779829 | orchestrator | Monday 02 June 2025 17:35:28 +0000 (0:00:00.153) 0:00:14.180 *********** 2025-06-02 17:35:44.779833 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:44.779838 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:44.779841 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:44.779845 | orchestrator | 2025-06-02 17:35:44.779853 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-02 17:35:44.779856 | orchestrator | Monday 02 June 2025 17:35:35 +0000 (0:00:07.350) 0:00:21.531 *********** 2025-06-02 17:35:44.779860 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:35:44.779864 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:35:44.779868 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:35:44.779871 | orchestrator | 2025-06-02 17:35:44.779875 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:35:44.779879 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:35:44.779883 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:35:44.779887 | orchestrator | testbed-node-2 : ok=9  changed=6  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:35:44.779891 | orchestrator | 2025-06-02 17:35:44.779895 | orchestrator | 2025-06-02 17:35:44.779898 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:35:44.779902 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:07.778) 0:00:29.309 *********** 2025-06-02 17:35:44.779906 | orchestrator | =============================================================================== 2025-06-02 17:35:44.779936 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.78s 2025-06-02 17:35:44.779940 | orchestrator | redis : Restart redis container ----------------------------------------- 7.35s 2025-06-02 17:35:44.779944 | orchestrator | redis : Copying over redis config files --------------------------------- 4.08s 2025-06-02 17:35:44.779947 | orchestrator | redis : Copying over default config.json files -------------------------- 3.67s 2025-06-02 17:35:44.779951 | orchestrator | redis : Check redis containers ------------------------------------------ 2.45s 2025-06-02 17:35:44.779955 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.93s 2025-06-02 17:35:44.779959 | orchestrator | redis : include_tasks --------------------------------------------------- 0.65s 2025-06-02 17:35:44.779963 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.47s 2025-06-02 17:35:44.779967 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2025-06-02 17:35:44.779970 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-06-02 17:35:44.779974 | orchestrator | 2025-06-02 17:35:44 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:35:44.780040 | orchestrator | 2025-06-02 17:35:44 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 
is in state STARTED 2025-06-02 17:35:44.780046 | orchestrator | 2025-06-02 17:35:44 | INFO  | Task 1dffcb9c-6b04-44b4-ab86-5cdc9108be47 is in state STARTED 2025-06-02 17:35:44.780825 | orchestrator | 2025-06-02 17:35:44 | INFO  | Task 0319920a-b556-4a67-bc17-7665313f7dc2 is in state STARTED 2025-06-02 17:35:44.780874 | orchestrator | 2025-06-02 17:35:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:30.487848 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:36:30.488709 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:36:30.490246 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:36:30.493036 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task 
72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED 2025-06-02 17:36:30.493854 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task 1dffcb9c-6b04-44b4-ab86-5cdc9108be47 is in state STARTED 2025-06-02 17:36:30.494912 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task 0319920a-b556-4a67-bc17-7665313f7dc2 is in state SUCCESS 2025-06-02 17:36:30.496370 | orchestrator | 2025-06-02 17:36:30.496449 | orchestrator | 2025-06-02 17:36:30.496469 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:36:30.496487 | orchestrator | 2025-06-02 17:36:30.496503 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:36:30.496518 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:00.540) 0:00:00.540 *********** 2025-06-02 17:36:30.496534 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:36:30.496547 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:36:30.496556 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:36:30.496564 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:36:30.496573 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:36:30.496607 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:36:30.496616 | orchestrator | 2025-06-02 17:36:30.496625 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:36:30.496634 | orchestrator | Monday 02 June 2025 17:35:17 +0000 (0:00:01.375) 0:00:01.916 *********** 2025-06-02 17:36:30.496643 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:36:30.496652 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:36:30.496660 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:36:30.496669 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 
17:36:30.496678 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:36:30.496686 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:36:30.496695 | orchestrator | 2025-06-02 17:36:30.496703 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-02 17:36:30.496712 | orchestrator | 2025-06-02 17:36:30.496721 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-02 17:36:30.496729 | orchestrator | Monday 02 June 2025 17:35:18 +0000 (0:00:01.564) 0:00:03.480 *********** 2025-06-02 17:36:30.496760 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:36:30.496771 | orchestrator | 2025-06-02 17:36:30.496780 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 17:36:30.496788 | orchestrator | Monday 02 June 2025 17:35:21 +0000 (0:00:02.961) 0:00:06.442 *********** 2025-06-02 17:36:30.496797 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-02 17:36:30.496806 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-02 17:36:30.496815 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-02 17:36:30.496824 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-02 17:36:30.496849 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-02 17:36:30.496864 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-02 17:36:30.496879 | orchestrator | 2025-06-02 17:36:30.496894 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 17:36:30.496909 | orchestrator | Monday 02 June 2025 17:35:23 +0000 (0:00:02.095) 0:00:08.538 
*********** 2025-06-02 17:36:30.496925 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-02 17:36:30.496940 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-02 17:36:30.496956 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-02 17:36:30.496967 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-02 17:36:30.496977 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-02 17:36:30.496986 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-02 17:36:30.496996 | orchestrator | 2025-06-02 17:36:30.497006 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 17:36:30.497016 | orchestrator | Monday 02 June 2025 17:35:26 +0000 (0:00:02.956) 0:00:11.494 *********** 2025-06-02 17:36:30.497037 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-02 17:36:30.497047 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:36:30.498443 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-02 17:36:30.498462 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:36:30.498471 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-02 17:36:30.498479 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:36:30.498488 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-02 17:36:30.498497 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:30.498506 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-02 17:36:30.498515 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:30.498523 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-02 17:36:30.498532 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:36:30.498540 | orchestrator | 2025-06-02 17:36:30.498549 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] 
***************** 2025-06-02 17:36:30.498559 | orchestrator | Monday 02 June 2025 17:35:28 +0000 (0:00:01.905) 0:00:13.400 *********** 2025-06-02 17:36:30.498568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:36:30.498576 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:36:30.498606 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:36:30.498615 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:30.498624 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:30.498633 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:36:30.498642 | orchestrator | 2025-06-02 17:36:30.498650 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-02 17:36:30.498659 | orchestrator | Monday 02 June 2025 17:35:30 +0000 (0:00:01.764) 0:00:15.164 *********** 2025-06-02 17:36:30.498705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498868 | orchestrator | 2025-06-02 17:36:30.498877 | orchestrator | TASK [openvswitch : Copying 
over config.json files for services] *************** 2025-06-02 17:36:30.498886 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:02.112) 0:00:17.276 *********** 2025-06-02 17:36:30.498895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498957 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.498987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499051 | orchestrator | 2025-06-02 17:36:30.499060 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-02 17:36:30.499069 | orchestrator | Monday 02 June 2025 17:35:37 +0000 (0:00:05.089) 0:00:22.366 *********** 2025-06-02 17:36:30.499078 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:36:30.499087 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:36:30.499096 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:36:30.499110 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:30.499118 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:30.499127 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:36:30.499135 | orchestrator | 2025-06-02 
17:36:30.499144 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-02 17:36:30.499153 | orchestrator | Monday 02 June 2025 17:35:38 +0000 (0:00:01.124) 0:00:23.490 *********** 2025-06-02 17:36:30.499165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-06-02 17:36:30.499229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:36:30.499312 | orchestrator | 2025-06-02 17:36:30.499321 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:36:30.499330 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:03.882) 0:00:27.372 *********** 2025-06-02 17:36:30.499338 | orchestrator | 2025-06-02 17:36:30.499352 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:36:30.499361 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.156) 0:00:27.529 *********** 2025-06-02 17:36:30.499369 | orchestrator | 2025-06-02 17:36:30.499378 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2025-06-02 17:36:30.499386 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.159) 0:00:27.688 *********** 2025-06-02 17:36:30.499395 | orchestrator | 2025-06-02 17:36:30.499404 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:36:30.499412 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.167) 0:00:27.856 *********** 2025-06-02 17:36:30.499421 | orchestrator | 2025-06-02 17:36:30.499430 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:36:30.499438 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:00.162) 0:00:28.018 *********** 2025-06-02 17:36:30.499447 | orchestrator | 2025-06-02 17:36:30.499456 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:36:30.499464 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:00.213) 0:00:28.231 *********** 2025-06-02 17:36:30.499473 | orchestrator | 2025-06-02 17:36:30.499482 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-02 17:36:30.499490 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:00.438) 0:00:28.669 *********** 2025-06-02 17:36:30.499499 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:30.499507 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:36:30.499516 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:30.499525 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:30.499533 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:30.499542 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:30.499551 | orchestrator | 2025-06-02 17:36:30.499560 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-02 17:36:30.499569 | orchestrator | Monday 02 June 2025 17:35:54 +0000 (0:00:10.448) 
0:00:39.118 *********** 2025-06-02 17:36:30.499578 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:36:30.499661 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:36:30.499670 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:36:30.499678 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:36:30.499687 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:36:30.499695 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:36:30.499704 | orchestrator | 2025-06-02 17:36:30.499713 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 17:36:30.499729 | orchestrator | Monday 02 June 2025 17:35:56 +0000 (0:00:02.096) 0:00:41.215 *********** 2025-06-02 17:36:30.499737 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:30.499746 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:30.499760 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:30.499775 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:30.499791 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:30.499806 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:36:30.499821 | orchestrator | 2025-06-02 17:36:30.499836 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-02 17:36:30.499851 | orchestrator | Monday 02 June 2025 17:36:05 +0000 (0:00:08.990) 0:00:50.205 *********** 2025-06-02 17:36:30.499867 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-02 17:36:30.499876 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-02 17:36:30.499885 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-02 17:36:30.499894 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-3'}) 2025-06-02 17:36:30.499903 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-02 17:36:30.499911 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-02 17:36:30.499920 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-02 17:36:30.499928 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-02 17:36:30.499937 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-02 17:36:30.499946 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-02 17:36:30.499954 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-02 17:36:30.499963 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-02 17:36:30.499972 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:36:30.499980 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:36:30.499989 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:36:30.499997 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:36:30.500010 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 
'absent'}) 2025-06-02 17:36:30.500019 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:36:30.500028 | orchestrator | 2025-06-02 17:36:30.500049 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-02 17:36:30.500068 | orchestrator | Monday 02 June 2025 17:36:12 +0000 (0:00:07.601) 0:00:57.806 *********** 2025-06-02 17:36:30.500077 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-02 17:36:30.500085 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:30.500094 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-02 17:36:30.500103 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:30.500112 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-02 17:36:30.500127 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:36:30.500135 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-02 17:36:30.500144 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-02 17:36:30.500153 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-02 17:36:30.500162 | orchestrator | 2025-06-02 17:36:30.500170 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-02 17:36:30.500179 | orchestrator | Monday 02 June 2025 17:36:15 +0000 (0:00:02.550) 0:01:00.357 *********** 2025-06-02 17:36:30.500188 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-02 17:36:30.500197 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:30.500206 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-02 17:36:30.500214 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:30.500223 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-02 17:36:30.500231 | orchestrator | skipping: [testbed-node-5] 
2025-06-02 17:36:30.500240 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-02 17:36:30.500249 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-02 17:36:30.500257 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-02 17:36:30.500266 | orchestrator | 2025-06-02 17:36:30.500274 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 17:36:30.500283 | orchestrator | Monday 02 June 2025 17:36:19 +0000 (0:00:03.943) 0:01:04.300 *********** 2025-06-02 17:36:30.500292 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:30.500300 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:30.500309 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:30.500317 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:36:30.500327 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:30.500336 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:30.500346 | orchestrator | 2025-06-02 17:36:30.500355 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:36:30.500366 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:36:30.500382 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:36:30.500392 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:36:30.500402 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:36:30.500412 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:36:30.500421 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 
17:36:30.500431 | orchestrator | 2025-06-02 17:36:30.500441 | orchestrator | 2025-06-02 17:36:30.500450 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:36:30.500460 | orchestrator | Monday 02 June 2025 17:36:27 +0000 (0:00:08.158) 0:01:12.459 *********** 2025-06-02 17:36:30.500470 | orchestrator | =============================================================================== 2025-06-02 17:36:30.500479 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.15s 2025-06-02 17:36:30.500489 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.45s 2025-06-02 17:36:30.500499 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.60s 2025-06-02 17:36:30.500508 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.09s 2025-06-02 17:36:30.500528 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.94s 2025-06-02 17:36:30.500538 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.88s 2025-06-02 17:36:30.500548 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.96s 2025-06-02 17:36:30.500557 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.96s 2025-06-02 17:36:30.500567 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.55s 2025-06-02 17:36:30.500576 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.11s 2025-06-02 17:36:30.500605 | orchestrator | module-load : Load modules ---------------------------------------------- 2.09s 2025-06-02 17:36:30.500615 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.09s 2025-06-02 17:36:30.500629 | orchestrator | module-load : Drop 
module persistence ----------------------------------- 1.91s
2025-06-02 17:36:30.500639 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.76s
2025-06-02 17:36:30.500649 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.56s
2025-06-02 17:36:30.500659 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.38s
2025-06-02 17:36:30.500668 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.30s
2025-06-02 17:36:30.500678 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.12s
2025-06-02 17:36:30.500687 | orchestrator | 2025-06-02 17:36:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:33.523888 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:36:33.524187 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:36:33.525094 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:36:33.528673 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:36:33.529353 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task 1dffcb9c-6b04-44b4-ab86-5cdc9108be47 is in state STARTED
2025-06-02 17:36:33.529375 | orchestrator | 2025-06-02 17:36:33 | INFO  | Wait 1 second(s) until the next check
[... the same five-task status check repeated every ~3 seconds from 17:36:36 through 17:38:02; all five tasks remained in state STARTED ...]
2025-06-02 17:38:05.151395 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:38:05.151739 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:38:05.153858 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:38:05.154259 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state STARTED
2025-06-02 17:38:05.155181 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task 1dffcb9c-6b04-44b4-ab86-5cdc9108be47 is in state SUCCESS
2025-06-02 17:38:05.156103 | orchestrator |
2025-06-02 17:38:05.156137 | orchestrator |
2025-06-02 17:38:05.156146 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-06-02 17:38:05.156177 | orchestrator |
2025-06-02 17:38:05.156184 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-02 17:38:05.156190 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.105) 0:00:00.105 ***********
2025-06-02 17:38:05.156195 | orchestrator | ok: [localhost] => {
2025-06-02 17:38:05.156204 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-06-02 17:38:05.156210 | orchestrator | }
2025-06-02 17:38:05.156216 | orchestrator |
2025-06-02 17:38:05.156222 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-06-02 17:38:05.156228 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.055) 0:00:00.160 ***********
2025-06-02 17:38:05.156235 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-06-02 17:38:05.156243 | orchestrator | ...ignoring
2025-06-02 17:38:05.156249 | orchestrator |
2025-06-02 17:38:05.156255 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-06-02 17:38:05.156262 | orchestrator | Monday 02 June 2025 17:35:45 +0000 (0:00:03.153) 0:00:03.314 ***********
2025-06-02 17:38:05.156269 | orchestrator | skipping: [localhost]
2025-06-02 17:38:05.156275 | orchestrator |
2025-06-02 17:38:05.156281 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-06-02 17:38:05.156286 | orchestrator | Monday 02 June 2025 17:35:45 +0000 (0:00:00.057) 0:00:03.372 ***********
2025-06-02 17:38:05.156292 | orchestrator | ok: [localhost]
2025-06-02 17:38:05.156298 | orchestrator |
2025-06-02 17:38:05.156304 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:38:05.156309 | orchestrator |
2025-06-02 17:38:05.156315 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:38:05.156320 | orchestrator | Monday 02 June 2025 17:35:45 +0000 (0:00:00.164) 0:00:03.536 ***********
2025-06-02 17:38:05.156326 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:05.156333 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:05.156339 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:05.156346 | orchestrator |
2025-06-02 17:38:05.156351 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:38:05.156357 | orchestrator | Monday 02 June 2025 17:35:46 +0000 (0:00:00.315) 0:00:03.852 ***********
2025-06-02 17:38:05.156364 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-06-02 17:38:05.156371 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-06-02 17:38:05.156377 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-06-02 17:38:05.156383 | orchestrator |
2025-06-02 17:38:05.156389 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-06-02 17:38:05.156394 | orchestrator |
2025-06-02 17:38:05.156400 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-02 17:38:05.156406 | orchestrator | Monday 02 June 2025 17:35:46 +0000 (0:00:00.491) 0:00:04.343 ***********
2025-06-02 17:38:05.156413 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:38:05.156419 | orchestrator |
2025-06-02 17:38:05.156426 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-02 17:38:05.156432 | orchestrator | Monday 02 June 2025 17:35:47 +0000 (0:00:00.796) 0:00:05.140 ***********
2025-06-02 17:38:05.156438 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:05.156444 | orchestrator |
2025-06-02 17:38:05.156450 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-06-02 17:38:05.156457 | orchestrator | Monday 02 June 2025 17:35:48 +0000 (0:00:00.997) 0:00:06.138 ***********
2025-06-02 17:38:05.156463 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:05.156470 | orchestrator |
2025-06-02 17:38:05.156476 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-06-02 17:38:05.156482 | orchestrator | Monday 02 June 2025 17:35:48 +0000 (0:00:00.413) 0:00:06.551 ***********
2025-06-02 17:38:05.156497 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:05.156503 | orchestrator |
2025-06-02 17:38:05.156509 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-06-02 17:38:05.156515 | orchestrator | Monday 02 June 2025 17:35:49 +0000 (0:00:00.394) 0:00:06.946 ***********
2025-06-02 17:38:05.156522 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:05.156528 | orchestrator |
2025-06-02 17:38:05.156534 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-06-02 17:38:05.156540 | orchestrator | Monday 02 June 2025 17:35:49 +0000 (0:00:00.382) 0:00:07.328 ***********
2025-06-02 17:38:05.156546 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:05.156649 | orchestrator |
2025-06-02 17:38:05.156656 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-02 17:38:05.156662 | orchestrator | Monday 02 June 2025 17:35:50 +0000 (0:00:00.626) 0:00:07.955 ***********
2025-06-02 17:38:05.156669 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:38:05.156675 | orchestrator |
2025-06-02 17:38:05.156682 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-02 17:38:05.156688 | orchestrator | Monday 02 June 2025 17:35:51 +0000 (0:00:01.533) 0:00:09.489 ***********
2025-06-02 17:38:05.156694 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:05.156700 | orchestrator |
2025-06-02 17:38:05.156706 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-06-02 17:38:05.156712 | orchestrator | Monday 02 June 2025 17:35:52 +0000 (0:00:00.990) 0:00:10.480 ***********
2025-06-02 17:38:05.156719 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:05.156725 | orchestrator |
2025-06-02 17:38:05.156732 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-06-02 17:38:05.156739 | orchestrator | Monday 02 June 2025 17:35:53 +0000 (0:00:00.589) 0:00:11.070 ***********
2025-06-02 17:38:05.156746 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:05.156753 | orchestrator |
2025-06-02 17:38:05.156771 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-06-02 17:38:05.156778 | orchestrator | Monday 02 June 2025 17:35:53 +0000 (0:00:00.672) 0:00:11.742 ***********
2025-06-02 17:38:05.156790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:38:05.158819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:38:05.158874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:38:05.158880 | orchestrator |
2025-06-02 17:38:05.158885 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-06-02 17:38:05.158890 | orchestrator | Monday 02 June 2025 17:35:55 +0000 (0:00:01.609) 0:00:13.356 ***********
2025-06-02 17:38:05.158907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:38:05.158912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:38:05.158919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:38:05.158926 | orchestrator |
2025-06-02 17:38:05.158930 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-06-02 17:38:05.158934 | orchestrator | Monday 02 June 2025 17:35:58 +0000 (0:00:03.003) 0:00:16.359 ***********
2025-06-02 17:38:05.158938 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 17:38:05.158943 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 17:38:05.158947 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-02 17:38:05.158951 | orchestrator |
2025-06-02 17:38:05.158954 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-06-02 17:38:05.158958 | orchestrator | Monday 02 June 2025 17:36:00 +0000 (0:00:01.825) 0:00:18.185 ***********
2025-06-02 17:38:05.158962 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 17:38:05.158967 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 17:38:05.158971 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-02 17:38:05.158974 | orchestrator |
2025-06-02 17:38:05.158978 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-06-02 17:38:05.158982 | orchestrator | Monday 02 June 2025 17:36:02 +0000 (0:00:02.179) 0:00:20.365 ***********
2025-06-02 17:38:05.158986 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 17:38:05.158990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 17:38:05.158993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-02 17:38:05.158997 | orchestrator |
2025-06-02 17:38:05.159005 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-06-02 17:38:05.159010 | orchestrator | Monday
02 June 2025 17:36:03 +0000 (0:00:01.421) 0:00:21.786 *********** 2025-06-02 17:38:05.159016 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 17:38:05.159021 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 17:38:05.159027 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 17:38:05.159033 | orchestrator | 2025-06-02 17:38:05.159039 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-02 17:38:05.159044 | orchestrator | Monday 02 June 2025 17:36:06 +0000 (0:00:02.080) 0:00:23.866 *********** 2025-06-02 17:38:05.159050 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 17:38:05.159056 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 17:38:05.159061 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 17:38:05.159073 | orchestrator | 2025-06-02 17:38:05.159079 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-02 17:38:05.159085 | orchestrator | Monday 02 June 2025 17:36:07 +0000 (0:00:01.561) 0:00:25.428 *********** 2025-06-02 17:38:05.159090 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 17:38:05.159097 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 17:38:05.159102 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 17:38:05.159108 | orchestrator | 2025-06-02 17:38:05.159114 | orchestrator | TASK [rabbitmq : include_tasks] 
************************************************ 2025-06-02 17:38:05.159120 | orchestrator | Monday 02 June 2025 17:36:09 +0000 (0:00:01.814) 0:00:27.243 *********** 2025-06-02 17:38:05.159126 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:05.159132 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:05.159137 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:05.159144 | orchestrator | 2025-06-02 17:38:05.159150 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-02 17:38:05.159156 | orchestrator | Monday 02 June 2025 17:36:09 +0000 (0:00:00.424) 0:00:27.668 *********** 2025-06-02 17:38:05.159168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:38:05.159173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:38:05.159183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-06-02 17:38:05.159191 | orchestrator | 2025-06-02 17:38:05.159195 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-02 17:38:05.159199 | orchestrator | Monday 02 June 2025 17:36:11 +0000 (0:00:01.995) 0:00:29.663 *********** 2025-06-02 17:38:05.159203 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:05.159207 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:05.159210 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:05.159214 | orchestrator | 2025-06-02 17:38:05.159218 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-02 17:38:05.159222 | orchestrator | Monday 02 June 2025 17:36:12 +0000 (0:00:00.911) 0:00:30.575 *********** 2025-06-02 17:38:05.159225 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:05.159229 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:05.159233 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:05.159236 | orchestrator | 2025-06-02 17:38:05.159240 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-02 17:38:05.159244 | orchestrator | Monday 02 June 2025 17:36:21 +0000 (0:00:08.946) 0:00:39.521 *********** 2025-06-02 17:38:05.159248 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:05.159251 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:05.159255 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:05.159259 | orchestrator | 2025-06-02 17:38:05.159263 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 17:38:05.159267 | orchestrator | 2025-06-02 17:38:05.159273 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 17:38:05.159277 | orchestrator | Monday 02 June 2025 17:36:22 +0000 (0:00:00.334) 0:00:39.855 *********** 2025-06-02 17:38:05.159281 
| orchestrator | ok: [testbed-node-0] 2025-06-02 17:38:05.159286 | orchestrator | 2025-06-02 17:38:05.159290 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 17:38:05.159293 | orchestrator | Monday 02 June 2025 17:36:22 +0000 (0:00:00.682) 0:00:40.538 *********** 2025-06-02 17:38:05.159297 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:05.159301 | orchestrator | 2025-06-02 17:38:05.159305 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 17:38:05.159308 | orchestrator | Monday 02 June 2025 17:36:22 +0000 (0:00:00.269) 0:00:40.808 *********** 2025-06-02 17:38:05.159312 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:05.159316 | orchestrator | 2025-06-02 17:38:05.159319 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 17:38:05.159323 | orchestrator | Monday 02 June 2025 17:36:24 +0000 (0:00:01.772) 0:00:42.581 *********** 2025-06-02 17:38:05.159327 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:05.159333 | orchestrator | 2025-06-02 17:38:05.159339 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 17:38:05.159345 | orchestrator | 2025-06-02 17:38:05.159351 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 17:38:05.159357 | orchestrator | Monday 02 June 2025 17:37:19 +0000 (0:00:54.508) 0:01:37.089 *********** 2025-06-02 17:38:05.159362 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:38:05.159367 | orchestrator | 2025-06-02 17:38:05.159373 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 17:38:05.159378 | orchestrator | Monday 02 June 2025 17:37:19 +0000 (0:00:00.637) 0:01:37.727 *********** 2025-06-02 17:38:05.159384 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
17:38:05.159394 | orchestrator | 2025-06-02 17:38:05.159400 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 17:38:05.159406 | orchestrator | Monday 02 June 2025 17:37:20 +0000 (0:00:00.437) 0:01:38.165 *********** 2025-06-02 17:38:05.159412 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:05.159418 | orchestrator | 2025-06-02 17:38:05.159424 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 17:38:05.159430 | orchestrator | Monday 02 June 2025 17:37:22 +0000 (0:00:02.064) 0:01:40.229 *********** 2025-06-02 17:38:05.159436 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:05.159442 | orchestrator | 2025-06-02 17:38:05.159447 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 17:38:05.159453 | orchestrator | 2025-06-02 17:38:05.159459 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 17:38:05.159464 | orchestrator | Monday 02 June 2025 17:37:39 +0000 (0:00:16.632) 0:01:56.862 *********** 2025-06-02 17:38:05.159470 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:38:05.159475 | orchestrator | 2025-06-02 17:38:05.159481 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 17:38:05.159487 | orchestrator | Monday 02 June 2025 17:37:39 +0000 (0:00:00.593) 0:01:57.455 *********** 2025-06-02 17:38:05.159493 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:05.159499 | orchestrator | 2025-06-02 17:38:05.159504 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 17:38:05.159516 | orchestrator | Monday 02 June 2025 17:37:39 +0000 (0:00:00.293) 0:01:57.749 *********** 2025-06-02 17:38:05.159522 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:05.159528 | orchestrator | 2025-06-02 
17:38:05.159533 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 17:38:05.159539 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:07.010) 0:02:04.760 *********** 2025-06-02 17:38:05.159565 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:05.159571 | orchestrator | 2025-06-02 17:38:05.159577 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-02 17:38:05.159583 | orchestrator | 2025-06-02 17:38:05.159589 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-02 17:38:05.159595 | orchestrator | Monday 02 June 2025 17:37:57 +0000 (0:00:10.802) 0:02:15.563 *********** 2025-06-02 17:38:05.159602 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:38:05.159608 | orchestrator | 2025-06-02 17:38:05.159614 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-02 17:38:05.159620 | orchestrator | Monday 02 June 2025 17:37:59 +0000 (0:00:01.509) 0:02:17.072 *********** 2025-06-02 17:38:05.159626 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 17:38:05.159631 | orchestrator | enable_outward_rabbitmq_True 2025-06-02 17:38:05.159637 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 17:38:05.159643 | orchestrator | outward_rabbitmq_restart 2025-06-02 17:38:05.159649 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:38:05.159655 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:38:05.159660 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:38:05.159666 | orchestrator | 2025-06-02 17:38:05.159672 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-02 17:38:05.159677 | orchestrator | skipping: no hosts matched 2025-06-02 17:38:05.159683 | orchestrator | 2025-06-02 
17:38:05.159689 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-02 17:38:05.159694 | orchestrator | skipping: no hosts matched 2025-06-02 17:38:05.159699 | orchestrator | 2025-06-02 17:38:05.159705 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-02 17:38:05.159710 | orchestrator | skipping: no hosts matched 2025-06-02 17:38:05.159715 | orchestrator | 2025-06-02 17:38:05.159721 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:38:05.159734 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 17:38:05.159741 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:38:05.159751 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:38:05.159758 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:38:05.159764 | orchestrator | 2025-06-02 17:38:05.159770 | orchestrator | 2025-06-02 17:38:05.159776 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:38:05.159782 | orchestrator | Monday 02 June 2025 17:38:02 +0000 (0:00:03.194) 0:02:20.266 *********** 2025-06-02 17:38:05.159788 | orchestrator | =============================================================================== 2025-06-02 17:38:05.159793 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.94s 2025-06-02 17:38:05.159800 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.85s 2025-06-02 17:38:05.159805 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.95s 2025-06-02 17:38:05.159811 | orchestrator | 
rabbitmq : Enable all stable feature flags ------------------------------ 3.19s 2025-06-02 17:38:05.159817 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.15s 2025-06-02 17:38:05.159823 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.00s 2025-06-02 17:38:05.159829 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.18s 2025-06-02 17:38:05.159835 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.08s 2025-06-02 17:38:05.159841 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.00s 2025-06-02 17:38:05.159847 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.91s 2025-06-02 17:38:05.159854 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.83s 2025-06-02 17:38:05.159861 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.81s 2025-06-02 17:38:05.159868 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.61s 2025-06-02 17:38:05.159874 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.56s 2025-06-02 17:38:05.159881 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.53s 2025-06-02 17:38:05.159886 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.51s 2025-06-02 17:38:05.159893 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.42s 2025-06-02 17:38:05.159899 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.00s 2025-06-02 17:38:05.159905 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s 2025-06-02 17:38:05.159912 | orchestrator | rabbitmq : Get 
container facts ------------------------------------------ 0.99s 2025-06-02 17:38:05.159918 | orchestrator | 2025-06-02 17:38:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:08.182759 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:08.182840 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:08.182847 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:08.183703 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task 72d7783d-ed9d-4918-87d7-add16d863f31 is in state SUCCESS 2025-06-02 17:38:08.185318 | orchestrator | 2025-06-02 17:38:08.185359 | orchestrator | 2025-06-02 17:38:08.185364 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-02 17:38:08.185369 | orchestrator | 2025-06-02 17:38:08.185374 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-02 17:38:08.185378 | orchestrator | Monday 02 June 2025 17:32:41 +0000 (0:00:00.216) 0:00:00.216 *********** 2025-06-02 17:38:08.185382 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:38:08.185388 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:38:08.185392 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:38:08.185395 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:38:08.185399 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:38:08.185403 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:38:08.185406 | orchestrator | 2025-06-02 17:38:08.185410 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-02 17:38:08.185414 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.658) 0:00:00.875 *********** 2025-06-02 17:38:08.185418 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:38:08.185423 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 17:38:08.185427 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:38:08.185430 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.185434 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:08.185438 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:08.185441 | orchestrator | 2025-06-02 17:38:08.185445 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-02 17:38:08.185449 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.781) 0:00:01.657 *********** 2025-06-02 17:38:08.185453 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:38:08.185456 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:38:08.185460 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:38:08.185464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.185467 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:08.185472 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:08.185475 | orchestrator | 2025-06-02 17:38:08.185479 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-02 17:38:08.185488 | orchestrator | Monday 02 June 2025 17:32:43 +0000 (0:00:00.849) 0:00:02.506 *********** 2025-06-02 17:38:08.185492 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:38:08.185496 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:38:08.185500 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:38:08.185503 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:08.185507 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:08.185511 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:08.185514 | orchestrator | 2025-06-02 17:38:08.185518 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-02 17:38:08.185522 | orchestrator | Monday 02 June 2025 17:32:45 +0000 
(0:00:02.042) 0:00:04.548 *********** 2025-06-02 17:38:08.185525 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:38:08.185529 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:38:08.185533 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:38:08.185536 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:08.185540 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:08.185544 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:08.185567 | orchestrator | 2025-06-02 17:38:08.185571 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-02 17:38:08.185575 | orchestrator | Monday 02 June 2025 17:32:47 +0000 (0:00:01.296) 0:00:05.845 *********** 2025-06-02 17:38:08.185579 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:38:08.185582 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:38:08.185586 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:38:08.185590 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:38:08.185594 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:38:08.185597 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:38:08.185601 | orchestrator | 2025-06-02 17:38:08.185605 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-02 17:38:08.185612 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.943) 0:00:06.788 *********** 2025-06-02 17:38:08.185616 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:38:08.185619 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:38:08.185623 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:38:08.185627 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.185630 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:08.185634 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:08.185638 | orchestrator | 2025-06-02 17:38:08.185642 | orchestrator | TASK [k3s_prereq : Load 
br_netfilter] ****************************************** 2025-06-02 17:38:08.185645 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.797) 0:00:07.586 *********** 2025-06-02 17:38:08.185649 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:38:08.185653 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:38:08.185657 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:38:08.185660 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.185664 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:08.185668 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:08.185672 | orchestrator | 2025-06-02 17:38:08.185675 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-02 17:38:08.185679 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.555) 0:00:08.141 *********** 2025-06-02 17:38:08.185683 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:38:08.185687 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:38:08.185690 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:38:08.185694 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:38:08.185698 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:38:08.185702 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:38:08.185706 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:38:08.185709 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:38:08.185713 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:38:08.185717 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:38:08.185728 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:08.185732 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.185736 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:08.185740 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:08.185744 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.185747 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:08.185751 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:08.185755 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.185759 | orchestrator |
2025-06-02 17:38:08.185763 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-06-02 17:38:08.185766 | orchestrator | Monday 02 June 2025 17:32:50 +0000 (0:00:00.759) 0:00:08.901 ***********
2025-06-02 17:38:08.185770 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.185774 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.185778 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.185781 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.185785 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.185789 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.185793 | orchestrator |
2025-06-02 17:38:08.185797 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-06-02 17:38:08.185801 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:01.634) 0:00:10.535 ***********
2025-06-02 17:38:08.185807 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:38:08.185811 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:38:08.185815 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:38:08.185818 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.185822 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.185826 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.185830 | orchestrator |
2025-06-02 17:38:08.185833 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-06-02 17:38:08.185839 | orchestrator | Monday 02 June 2025 17:32:52 +0000 (0:00:00.690) 0:00:11.226 ***********
2025-06-02 17:38:08.185843 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:38:08.185847 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.185851 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:38:08.185854 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:38:08.185858 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.185862 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.185866 | orchestrator |
2025-06-02 17:38:08.185870 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-06-02 17:38:08.185873 | orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:05.979) 0:00:17.206 ***********
2025-06-02 17:38:08.185877 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.185881 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.185885 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.185888 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.185892 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.185896 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.185900 | orchestrator |
2025-06-02 17:38:08.185904 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-06-02 17:38:08.185907 | orchestrator | Monday 02 June 2025 17:32:59 +0000 (0:00:00.858) 0:00:18.065 ***********
2025-06-02 17:38:08.185911 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.185915 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.185919 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.185924 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.185928 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.185932 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.185937 | orchestrator |
2025-06-02 17:38:08.185941 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-06-02 17:38:08.185947 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:02.229) 0:00:20.294 ***********
2025-06-02 17:38:08.185952 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.185956 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.185960 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.185965 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.185969 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.185973 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.185978 | orchestrator |
2025-06-02 17:38:08.185982 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-06-02 17:38:08.185987 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.998) 0:00:21.293 ***********
2025-06-02 17:38:08.185991 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-06-02 17:38:08.185996 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-06-02 17:38:08.186008 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.186012 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-06-02 17:38:08.186062 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-06-02 17:38:08.186066 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.186070 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-06-02 17:38:08.186075 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-06-02 17:38:08.186079 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.186083 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-06-02 17:38:08.186091 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-06-02 17:38:08.186095 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186100 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-06-02 17:38:08.186104 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-06-02 17:38:08.186109 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186113 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-06-02 17:38:08.186118 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-06-02 17:38:08.186122 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186127 | orchestrator |
2025-06-02 17:38:08.186132 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-06-02 17:38:08.186140 | orchestrator | Monday 02 June 2025 17:33:03 +0000 (0:00:01.374) 0:00:22.669 ***********
2025-06-02 17:38:08.186144 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.186149 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.186153 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.186157 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186162 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186166 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186171 | orchestrator |
2025-06-02 17:38:08.186175 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-06-02 17:38:08.186179 | orchestrator |
2025-06-02 17:38:08.186184 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-06-02 17:38:08.186188 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:01.635) 0:00:24.304 ***********
2025-06-02 17:38:08.186193 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186197 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186202 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186206 | orchestrator |
2025-06-02 17:38:08.186210 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-06-02 17:38:08.186215 | orchestrator | Monday 02 June 2025 17:33:07 +0000 (0:00:01.606) 0:00:25.911 ***********
2025-06-02 17:38:08.186219 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186223 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186228 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186232 | orchestrator |
2025-06-02 17:38:08.186236 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-06-02 17:38:08.186241 | orchestrator | Monday 02 June 2025 17:33:08 +0000 (0:00:01.340) 0:00:27.251 ***********
2025-06-02 17:38:08.186245 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186250 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186254 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186258 | orchestrator |
2025-06-02 17:38:08.186263 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-06-02 17:38:08.186267 | orchestrator | Monday 02 June 2025 17:33:09 +0000 (0:00:01.158) 0:00:28.409 ***********
2025-06-02 17:38:08.186272 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186276 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186283 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186287 | orchestrator |
2025-06-02 17:38:08.186290 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-06-02 17:38:08.186294 | orchestrator | Monday 02 June 2025 17:33:10 +0000 (0:00:00.919) 0:00:29.329 ***********
2025-06-02 17:38:08.186298 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186302 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186305 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186309 | orchestrator |
2025-06-02 17:38:08.186313 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-06-02 17:38:08.186316 | orchestrator | Monday 02 June 2025 17:33:11 +0000 (0:00:00.414) 0:00:29.744 ***********
2025-06-02 17:38:08.186320 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:38:08.186324 | orchestrator |
2025-06-02 17:38:08.186330 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-06-02 17:38:08.186334 | orchestrator | Monday 02 June 2025 17:33:11 +0000 (0:00:00.728) 0:00:30.472 ***********
2025-06-02 17:38:08.186338 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186341 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186345 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186349 | orchestrator |
2025-06-02 17:38:08.186352 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-06-02 17:38:08.186356 | orchestrator | Monday 02 June 2025 17:33:14 +0000 (0:00:03.120) 0:00:33.593 ***********
2025-06-02 17:38:08.186360 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186364 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186367 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186371 | orchestrator |
2025-06-02 17:38:08.186375 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-06-02 17:38:08.186378 | orchestrator | Monday 02 June 2025 17:33:15 +0000 (0:00:00.852) 0:00:34.445 ***********
2025-06-02 17:38:08.186382 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186386 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186389 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186393 | orchestrator |
2025-06-02 17:38:08.186397 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-06-02 17:38:08.186401 | orchestrator | Monday 02 June 2025 17:33:16 +0000 (0:00:00.994) 0:00:35.440 ***********
2025-06-02 17:38:08.186404 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186408 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186412 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186415 | orchestrator |
2025-06-02 17:38:08.186419 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-06-02 17:38:08.186423 | orchestrator | Monday 02 June 2025 17:33:18 +0000 (0:00:02.264) 0:00:37.704 ***********
2025-06-02 17:38:08.186426 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186430 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186434 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186437 | orchestrator |
2025-06-02 17:38:08.186441 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-06-02 17:38:08.186445 | orchestrator | Monday 02 June 2025 17:33:19 +0000 (0:00:00.305) 0:00:38.010 ***********
2025-06-02 17:38:08.186449 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186452 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186456 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186460 | orchestrator |
2025-06-02 17:38:08.186463 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-06-02 17:38:08.186467 | orchestrator | Monday 02 June 2025 17:33:19 +0000 (0:00:00.349) 0:00:38.359 ***********
2025-06-02 17:38:08.186471 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186474 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186485 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186489 | orchestrator |
2025-06-02 17:38:08.186492 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-06-02 17:38:08.186496 | orchestrator | Monday 02 June 2025 17:33:21 +0000 (0:00:01.738) 0:00:40.097 ***********
2025-06-02 17:38:08.186502 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-02 17:38:08.186506 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-02 17:38:08.186510 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-02 17:38:08.186514 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-02 17:38:08.186518 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-02 17:38:08.186525 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-02 17:38:08.186528 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-02 17:38:08.186532 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-02 17:38:08.186536 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-02 17:38:08.186542 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-02 17:38:08.186584 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-02 17:38:08.186588 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-02 17:38:08.186592 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186596 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186600 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186603 | orchestrator |
2025-06-02 17:38:08.186607 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-06-02 17:38:08.186611 | orchestrator | Monday 02 June 2025 17:34:06 +0000 (0:00:45.497) 0:01:25.595 ***********
2025-06-02 17:38:08.186615 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186619 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186623 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186626 | orchestrator |
2025-06-02 17:38:08.186630 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-06-02 17:38:08.186634 | orchestrator | Monday 02 June 2025 17:34:07 +0000 (0:00:00.323) 0:01:25.919 ***********
2025-06-02 17:38:08.186638 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186642 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186646 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186649 | orchestrator |
2025-06-02 17:38:08.186653 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-06-02 17:38:08.186657 | orchestrator | Monday 02 June 2025 17:34:08 +0000 (0:00:01.024) 0:01:26.943 ***********
2025-06-02 17:38:08.186661 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186664 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186668 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186672 | orchestrator |
2025-06-02 17:38:08.186676 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-06-02 17:38:08.186680 | orchestrator | Monday 02 June 2025 17:34:09 +0000 (0:00:01.294) 0:01:28.237 ***********
2025-06-02 17:38:08.186684 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186687 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186691 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186695 | orchestrator |
2025-06-02 17:38:08.186699 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-06-02 17:38:08.186702 | orchestrator | Monday 02 June 2025 17:34:25 +0000 (0:00:15.491) 0:01:43.729 ***********
2025-06-02 17:38:08.186706 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186710 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186714 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186717 | orchestrator |
2025-06-02 17:38:08.186721 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-06-02 17:38:08.186725 | orchestrator | Monday 02 June 2025 17:34:25 +0000 (0:00:00.893) 0:01:44.623 ***********
2025-06-02 17:38:08.186729 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186736 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186740 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186743 | orchestrator |
2025-06-02 17:38:08.186747 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-06-02 17:38:08.186751 | orchestrator | Monday 02 June 2025 17:34:26 +0000 (0:00:00.852) 0:01:45.475 ***********
2025-06-02 17:38:08.186755 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186759 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186762 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186766 | orchestrator |
2025-06-02 17:38:08.186770 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-06-02 17:38:08.186774 | orchestrator | Monday 02 June 2025 17:34:27 +0000 (0:00:00.828) 0:01:46.304 ***********
2025-06-02 17:38:08.186778 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186782 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186785 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186789 | orchestrator |
2025-06-02 17:38:08.186793 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-06-02 17:38:08.186797 | orchestrator | Monday 02 June 2025 17:34:28 +0000 (0:00:01.068) 0:01:47.372 ***********
2025-06-02 17:38:08.186803 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186807 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186811 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186815 | orchestrator |
2025-06-02 17:38:08.186818 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-06-02 17:38:08.186822 | orchestrator | Monday 02 June 2025 17:34:28 +0000 (0:00:00.282) 0:01:47.655 ***********
2025-06-02 17:38:08.186826 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186830 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186834 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186837 | orchestrator |
2025-06-02 17:38:08.186841 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-06-02 17:38:08.186845 | orchestrator | Monday 02 June 2025 17:34:29 +0000 (0:00:00.616) 0:01:48.272 ***********
2025-06-02 17:38:08.186849 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186853 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186857 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186860 | orchestrator |
2025-06-02 17:38:08.186864 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-06-02 17:38:08.186868 | orchestrator | Monday 02 June 2025 17:34:30 +0000 (0:00:00.657) 0:01:48.930 ***********
2025-06-02 17:38:08.186872 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186876 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186879 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186883 | orchestrator |
2025-06-02 17:38:08.186887 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-06-02 17:38:08.186891 | orchestrator | Monday 02 June 2025 17:34:31 +0000 (0:00:01.407) 0:01:50.337 ***********
2025-06-02 17:38:08.186895 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:08.186898 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:08.186902 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:08.186906 | orchestrator |
2025-06-02 17:38:08.186910 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-06-02 17:38:08.186914 | orchestrator | Monday 02 June 2025 17:34:32 +0000 (0:00:01.021) 0:01:51.359 ***********
2025-06-02 17:38:08.186920 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186924 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186928 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186931 | orchestrator |
2025-06-02 17:38:08.186935 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-06-02 17:38:08.186939 | orchestrator | Monday 02 June 2025 17:34:32 +0000 (0:00:00.322) 0:01:51.682 ***********
2025-06-02 17:38:08.186943 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.186947 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.186950 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.186958 | orchestrator |
2025-06-02 17:38:08.186962 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-06-02 17:38:08.186965 | orchestrator | Monday 02 June 2025 17:34:33 +0000 (0:00:00.297) 0:01:51.979 ***********
2025-06-02 17:38:08.186969 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186973 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186977 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.186981 | orchestrator |
2025-06-02 17:38:08.186984 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-06-02 17:38:08.186988 | orchestrator | Monday 02 June 2025 17:34:34 +0000 (0:00:00.985) 0:01:52.965 ***********
2025-06-02 17:38:08.186992 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.186996 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.186999 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.187003 | orchestrator |
2025-06-02 17:38:08.187007 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-06-02 17:38:08.187011 | orchestrator | Monday 02 June 2025 17:34:34 +0000 (0:00:00.618) 0:01:53.584 ***********
2025-06-02 17:38:08.187015 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-02 17:38:08.187018 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-02 17:38:08.187022 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-02 17:38:08.187026 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-02 17:38:08.187030 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-02 17:38:08.187034 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-02 17:38:08.187037 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-02 17:38:08.187041 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-02 17:38:08.187045 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-02 17:38:08.187049 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-02 17:38:08.187053 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-06-02 17:38:08.187057 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-02 17:38:08.187060 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-02 17:38:08.187064 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-06-02 17:38:08.187068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-02 17:38:08.187072 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-02 17:38:08.187078 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-02 17:38:08.187082 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-02 17:38:08.187085 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-02 17:38:08.187089 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-02 17:38:08.187093 | orchestrator |
2025-06-02 17:38:08.187097 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-06-02 17:38:08.187100 | orchestrator |
2025-06-02 17:38:08.187104 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-06-02 17:38:08.187108 | orchestrator | Monday 02 June 2025 17:34:38 +0000 (0:00:03.294) 0:01:56.878 ***********
2025-06-02 17:38:08.187114 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:38:08.187118 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:38:08.187122 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:38:08.187126 | orchestrator |
2025-06-02 17:38:08.187129 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-06-02 17:38:08.187133 | orchestrator | Monday 02 June 2025 17:34:38 +0000 (0:00:00.609) 0:01:57.487 ***********
2025-06-02 17:38:08.187137 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:38:08.187141 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:38:08.187144 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:38:08.187148 | orchestrator |
2025-06-02 17:38:08.187152 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-06-02 17:38:08.187156 | orchestrator | Monday 02 June 2025 17:34:39 +0000 (0:00:00.659) 0:01:58.146 ***********
2025-06-02 17:38:08.187159 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:38:08.187163 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:38:08.187167 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:38:08.187170 | orchestrator |
2025-06-02 17:38:08.187174 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-06-02 17:38:08.187182 | orchestrator | Monday 02 June 2025 17:34:39 +0000 (0:00:00.371) 0:01:58.518 ***********
2025-06-02 17:38:08.187186 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:38:08.187190 | orchestrator |
2025-06-02 17:38:08.187194 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-06-02 17:38:08.187198 | orchestrator | Monday 02 June 2025 17:34:40 +0000 (0:00:00.680) 0:01:59.198 ***********
2025-06-02 17:38:08.187201 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.187205 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.187209 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.187213 | orchestrator |
2025-06-02 17:38:08.187216 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-06-02 17:38:08.187220 | orchestrator | Monday 02 June 2025 17:34:40 +0000 (0:00:00.309) 0:01:59.508 ***********
2025-06-02 17:38:08.187224 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.187228 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.187231 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.187242 | orchestrator |
2025-06-02 17:38:08.187247 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-06-02 17:38:08.187250 | orchestrator | Monday 02 June 2025 17:34:41 +0000 (0:00:00.310) 0:01:59.819 ***********
2025-06-02 17:38:08.187254 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:08.187258 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:08.187261 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:08.187265 | orchestrator |
2025-06-02 17:38:08.187269 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-06-02 17:38:08.187273 | orchestrator | Monday 02 June 2025 17:34:41 +0000 (0:00:00.294) 0:02:00.113 ***********
2025-06-02 17:38:08.187276 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:38:08.187280 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:38:08.187284 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:38:08.187288 | orchestrator |
2025-06-02 17:38:08.187299 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-06-02 17:38:08.187302 | orchestrator | Monday 02 June 2025 17:34:42 +0000 (0:00:01.423) 0:02:01.537 ***********
2025-06-02 17:38:08.187306 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:38:08.187310 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:38:08.187314 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:38:08.187318 | orchestrator |
2025-06-02 17:38:08.187321 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-02 17:38:08.187325 | orchestrator |
2025-06-02 17:38:08.187329 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-02 17:38:08.187332 | orchestrator | Monday 02 June 2025 17:34:53 +0000 (0:00:11.133) 0:02:12.671 ***********
2025-06-02 17:38:08.187340 | orchestrator | ok: [testbed-manager]
2025-06-02 17:38:08.187344 | orchestrator |
2025-06-02 17:38:08.187348 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-02 17:38:08.187352 | orchestrator | Monday 02 June 2025 17:34:54 +0000 (0:00:00.824) 0:02:13.495 ***********
2025-06-02 17:38:08.187355 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187359 | orchestrator |
2025-06-02 17:38:08.187363 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-02 17:38:08.187367 | orchestrator | Monday 02 June 2025 17:34:55 +0000 (0:00:00.415) 0:02:13.911 ***********
2025-06-02 17:38:08.187370 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-02 17:38:08.187374 | orchestrator |
2025-06-02 17:38:08.187378 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-02 17:38:08.187396 | orchestrator | Monday 02 June 2025 17:34:56 +0000 (0:00:00.991) 0:02:14.902 ***********
2025-06-02 17:38:08.187400 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187404 | orchestrator |
2025-06-02 17:38:08.187408 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-02 17:38:08.187411 | orchestrator | Monday 02 June 2025 17:34:57 +0000 (0:00:00.831) 0:02:15.734 ***********
2025-06-02 17:38:08.187415 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187419 | orchestrator |
2025-06-02 17:38:08.187423 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-02 17:38:08.187429 | orchestrator | Monday 02 June 2025 17:34:57 +0000 (0:00:00.630) 0:02:16.365 ***********
2025-06-02 17:38:08.187433 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-02 17:38:08.187437 | orchestrator |
2025-06-02 17:38:08.187441 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-02 17:38:08.187445 | orchestrator | Monday 02 June 2025 17:34:59 +0000 (0:00:01.706) 0:02:18.072 ***********
2025-06-02 17:38:08.187448 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-02 17:38:08.187452 | orchestrator |
2025-06-02 17:38:08.187456 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-02 17:38:08.187460 | orchestrator | Monday 02 June 2025 17:35:00 +0000 (0:00:00.899) 0:02:18.971 ***********
2025-06-02 17:38:08.187463 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187467 | orchestrator |
2025-06-02 17:38:08.187471 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-02 17:38:08.187475 | orchestrator | Monday 02 June 2025 17:35:00 +0000 (0:00:00.413) 0:02:19.384 ***********
2025-06-02 17:38:08.187478 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187482 | orchestrator |
2025-06-02 17:38:08.187486 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-06-02 17:38:08.187490 | orchestrator |
2025-06-02 17:38:08.187493 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-06-02 17:38:08.187497 | orchestrator | Monday 02 June 2025 17:35:01 +0000 (0:00:00.475) 0:02:19.860 ***********
2025-06-02 17:38:08.187501 | orchestrator | ok: [testbed-manager]
2025-06-02 17:38:08.187505 | orchestrator |
2025-06-02 17:38:08.187508 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-06-02 17:38:08.187512 | orchestrator | Monday 02 June 2025 17:35:01 +0000 (0:00:00.159) 0:02:20.020 ***********
2025-06-02 17:38:08.187516 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 17:38:08.187520 | orchestrator |
2025-06-02 17:38:08.187524 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-06-02 17:38:08.187536 | orchestrator | Monday 02 June 2025 17:35:01 +0000 (0:00:00.476) 0:02:20.496 ***********
2025-06-02 17:38:08.187540 | orchestrator | ok: [testbed-manager]
2025-06-02 17:38:08.187544 | orchestrator |
2025-06-02 17:38:08.187581 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-06-02 17:38:08.187585 | orchestrator | Monday 02 June 2025 17:35:02 +0000 (0:00:00.834) 0:02:21.331 ***********
2025-06-02 17:38:08.187595 | orchestrator | ok: [testbed-manager]
2025-06-02 17:38:08.187599 | orchestrator |
2025-06-02 17:38:08.187602 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-06-02 17:38:08.187606 | orchestrator | Monday 02 June 2025 17:35:04 +0000 (0:00:01.675) 0:02:23.006 ***********
2025-06-02 17:38:08.187610 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187614 | orchestrator |
2025-06-02 17:38:08.187617 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-06-02 17:38:08.187621 | orchestrator | Monday 02 June 2025 17:35:05 +0000 (0:00:00.748) 0:02:23.754 ***********
2025-06-02 17:38:08.187625 | orchestrator | ok: [testbed-manager]
2025-06-02 17:38:08.187629 | orchestrator |
2025-06-02 17:38:08.187633 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-06-02 17:38:08.187636 | orchestrator | Monday 02 June 2025 17:35:05 +0000 (0:00:00.445) 0:02:24.199 ***********
2025-06-02 17:38:08.187640 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187644 | orchestrator |
2025-06-02 17:38:08.187648 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-06-02 17:38:08.187651 | orchestrator | Monday 02 June 2025 17:35:12 +0000 (0:00:07.435) 0:02:31.635 ***********
2025-06-02 17:38:08.187655 | orchestrator | changed: [testbed-manager]
2025-06-02 17:38:08.187659 | orchestrator |
2025-06-02 17:38:08.187663 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-06-02 17:38:08.187666 | orchestrator | Monday 02 June 2025 17:35:25 +0000 (0:00:12.674) 0:02:44.309 ***********
2025-06-02 17:38:08.187670 | orchestrator | ok: [testbed-manager]
2025-06-02 17:38:08.187674 | orchestrator |
2025-06-02 17:38:08.187678 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-06-02 17:38:08.187681 | orchestrator |
2025-06-02 17:38:08.187685 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-06-02 17:38:08.187689 | orchestrator | Monday 02 June 2025 17:35:26 +0000 (0:00:00.631) 0:02:44.941 ***********
2025-06-02 17:38:08.187693 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:08.187697 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:08.187700 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:08.187704 | orchestrator |
2025-06-02 17:38:08.187708 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-06-02 17:38:08.187712 | orchestrator | Monday 02 June 2025 17:35:26 +0000 (0:00:00.611) 0:02:45.552 ***********
2025-06-02 17:38:08.187716 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:08.187719 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:08.187723 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:08.187727 | orchestrator |
2025-06-02 17:38:08.187731 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-06-02 17:38:08.187735 | orchestrator | Monday 02 June 2025 17:35:27 +0000 (0:00:00.354) 0:02:45.907 ***********
2025-06-02 17:38:08.187738 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:38:08.187742 | orchestrator |
2025-06-02 17:38:08.187746 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-06-02 17:38:08.187750 | orchestrator | Monday 02 June 2025 17:35:27 +0000 (0:00:00.581) 0:02:46.488 ***********
2025-06-02 17:38:08.187753 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-02 17:38:08.187757 | orchestrator |
2025-06-02 17:38:08.187761 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-06-02 17:38:08.187765 | orchestrator | Monday 02 June 2025 17:35:29 +0000 (0:00:01.477) 0:02:47.966 ***********
2025-06-02 17:38:08.187769 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:38:08.187773 | orchestrator | 2025-06-02 17:38:08.187776 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-02 17:38:08.187783 | orchestrator | Monday 02 June 2025 17:35:30 +0000 (0:00:01.050) 0:02:49.016 *********** 2025-06-02 17:38:08.187787 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.187790 | orchestrator | 2025-06-02 17:38:08.187794 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-02 17:38:08.187801 | orchestrator | Monday 02 June 2025 17:35:30 +0000 (0:00:00.254) 0:02:49.270 *********** 2025-06-02 17:38:08.187805 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:38:08.187809 | orchestrator | 2025-06-02 17:38:08.187813 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-02 17:38:08.187816 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:01.532) 0:02:50.803 *********** 2025-06-02 17:38:08.187820 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.187824 | orchestrator | 2025-06-02 17:38:08.187828 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-02 17:38:08.187831 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:00.267) 0:02:51.071 *********** 2025-06-02 17:38:08.187835 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.187839 | orchestrator | 2025-06-02 17:38:08.187843 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-02 17:38:08.187846 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:00.235) 0:02:51.307 *********** 2025-06-02 17:38:08.187850 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.187854 | orchestrator | 2025-06-02 17:38:08.187857 | orchestrator | TASK [k3s_server_post : Log result] 
******************************************** 2025-06-02 17:38:08.187861 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:00.235) 0:02:51.542 *********** 2025-06-02 17:38:08.187865 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.187869 | orchestrator | 2025-06-02 17:38:08.187873 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-02 17:38:08.187876 | orchestrator | Monday 02 June 2025 17:35:33 +0000 (0:00:00.246) 0:02:51.789 *********** 2025-06-02 17:38:08.187880 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 17:38:08.187884 | orchestrator | 2025-06-02 17:38:08.187890 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-06-02 17:38:08.187894 | orchestrator | Monday 02 June 2025 17:35:38 +0000 (0:00:05.141) 0:02:56.931 *********** 2025-06-02 17:38:08.187898 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-02 17:38:08.187902 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-06-02 17:38:08.187905 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left). 2025-06-02 17:38:08.187909 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (28 retries left). 
2025-06-02 17:38:08.187913 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-02 17:38:08.187917 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-02 17:38:08.187921 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-02 17:38:08.187924 | orchestrator | 2025-06-02 17:38:08.187928 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-02 17:38:08.187933 | orchestrator | Monday 02 June 2025 17:37:37 +0000 (0:01:59.264) 0:04:56.196 *********** 2025-06-02 17:38:08.187939 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:38:08.187944 | orchestrator | 2025-06-02 17:38:08.187950 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-02 17:38:08.187956 | orchestrator | Monday 02 June 2025 17:37:38 +0000 (0:00:01.353) 0:04:57.549 *********** 2025-06-02 17:38:08.187961 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 17:38:08.187967 | orchestrator | 2025-06-02 17:38:08.187974 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-02 17:38:08.187978 | orchestrator | Monday 02 June 2025 17:37:40 +0000 (0:00:01.741) 0:04:59.291 *********** 2025-06-02 17:38:08.187982 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 17:38:08.187985 | orchestrator | 2025-06-02 17:38:08.187989 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-06-02 17:38:08.187993 | orchestrator | Monday 02 June 2025 17:37:42 +0000 (0:00:01.808) 0:05:01.100 *********** 2025-06-02 17:38:08.188000 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.188004 | orchestrator | 2025-06-02 17:38:08.188007 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-02 17:38:08.188011 | orchestrator 
| Monday 02 June 2025 17:37:42 +0000 (0:00:00.240) 0:05:01.341 *********** 2025-06-02 17:38:08.188015 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-02 17:38:08.188019 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-02 17:38:08.188023 | orchestrator | 2025-06-02 17:38:08.188026 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-02 17:38:08.188030 | orchestrator | Monday 02 June 2025 17:37:44 +0000 (0:00:02.292) 0:05:03.634 *********** 2025-06-02 17:38:08.188034 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.188038 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:08.188042 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:08.188045 | orchestrator | 2025-06-02 17:38:08.188049 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-02 17:38:08.188053 | orchestrator | Monday 02 June 2025 17:37:45 +0000 (0:00:00.308) 0:05:03.942 *********** 2025-06-02 17:38:08.188056 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:38:08.188060 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:38:08.188064 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:38:08.188068 | orchestrator | 2025-06-02 17:38:08.188071 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-02 17:38:08.188075 | orchestrator | 2025-06-02 17:38:08.188079 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-06-02 17:38:08.188083 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.941) 0:05:04.884 *********** 2025-06-02 17:38:08.188089 | orchestrator | ok: [testbed-manager] 2025-06-02 17:38:08.188093 | orchestrator | 2025-06-02 17:38:08.188097 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2025-06-02 17:38:08.188101 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.387) 0:05:05.272 *********** 2025-06-02 17:38:08.188105 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 17:38:08.188108 | orchestrator | 2025-06-02 17:38:08.188112 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-02 17:38:08.188116 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.249) 0:05:05.521 *********** 2025-06-02 17:38:08.188120 | orchestrator | changed: [testbed-manager] 2025-06-02 17:38:08.188123 | orchestrator | 2025-06-02 17:38:08.188127 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-02 17:38:08.188131 | orchestrator | 2025-06-02 17:38:08.188135 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-02 17:38:08.188138 | orchestrator | Monday 02 June 2025 17:37:52 +0000 (0:00:05.924) 0:05:11.446 *********** 2025-06-02 17:38:08.188142 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:38:08.188146 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:38:08.188150 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:38:08.188153 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:38:08.188157 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:38:08.188161 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:38:08.188172 | orchestrator | 2025-06-02 17:38:08.188176 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-02 17:38:08.188180 | orchestrator | Monday 02 June 2025 17:37:53 +0000 (0:00:00.670) 0:05:12.116 *********** 2025-06-02 17:38:08.188184 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 17:38:08.188187 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2025-06-02 17:38:08.188192 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-02 17:38:08.188195 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 17:38:08.188202 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 17:38:08.188206 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-02 17:38:08.188210 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 17:38:08.188213 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 17:38:08.188266 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-02 17:38:08.188280 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 17:38:08.188284 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 17:38:08.188288 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-02 17:38:08.188291 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 17:38:08.188295 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 17:38:08.188299 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 17:38:08.188303 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 17:38:08.188306 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 17:38:08.188310 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2025-06-02 17:38:08.188314 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 17:38:08.188317 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 17:38:08.188321 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 17:38:08.188325 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 17:38:08.188328 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 17:38:08.188332 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 17:38:08.188336 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 17:38:08.188340 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 17:38:08.188344 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 17:38:08.188348 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 17:38:08.188351 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 17:38:08.188361 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 17:38:08.188365 | orchestrator | 2025-06-02 17:38:08.188369 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-02 17:38:08.188372 | orchestrator | Monday 02 June 2025 17:38:06 +0000 (0:00:13.143) 0:05:25.260 *********** 2025-06-02 17:38:08.188376 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:38:08.188380 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:38:08.188388 | orchestrator | 
skipping: [testbed-node-5] 2025-06-02 17:38:08.188392 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.188395 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:08.188399 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:08.188403 | orchestrator | 2025-06-02 17:38:08.188406 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-02 17:38:08.188410 | orchestrator | Monday 02 June 2025 17:38:07 +0000 (0:00:00.471) 0:05:25.731 *********** 2025-06-02 17:38:08.188418 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:38:08.188421 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:38:08.188425 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:38:08.188429 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:38:08.188433 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:38:08.188436 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:38:08.188440 | orchestrator | 2025-06-02 17:38:08.188444 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:38:08.188448 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:38:08.188453 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-02 17:38:08.188457 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 17:38:08.188461 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 17:38:08.188467 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 17:38:08.188471 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 17:38:08.188475 | orchestrator | testbed-node-5 : ok=14  
changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 17:38:08.188478 | orchestrator | 2025-06-02 17:38:08.188482 | orchestrator | 2025-06-02 17:38:08.188486 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:38:08.188490 | orchestrator | Monday 02 June 2025 17:38:07 +0000 (0:00:00.675) 0:05:26.406 *********** 2025-06-02 17:38:08.188494 | orchestrator | =============================================================================== 2025-06-02 17:38:08.188497 | orchestrator | k3s_server_post : Wait for Cilium resources --------------------------- 119.27s 2025-06-02 17:38:08.188501 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.50s 2025-06-02 17:38:08.188505 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.49s 2025-06-02 17:38:08.188509 | orchestrator | Manage labels ---------------------------------------------------------- 13.14s 2025-06-02 17:38:08.188512 | orchestrator | kubectl : Install required packages ------------------------------------ 12.67s 2025-06-02 17:38:08.188516 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.13s 2025-06-02 17:38:08.188520 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.44s 2025-06-02 17:38:08.188523 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.98s 2025-06-02 17:38:08.188527 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.92s 2025-06-02 17:38:08.188531 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.14s 2025-06-02 17:38:08.188535 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.29s 2025-06-02 17:38:08.188539 | orchestrator 
| k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.12s 2025-06-02 17:38:08.188542 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.29s 2025-06-02 17:38:08.188579 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.26s 2025-06-02 17:38:08.188584 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.23s 2025-06-02 17:38:08.188587 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s 2025-06-02 17:38:08.188594 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 1.81s 2025-06-02 17:38:08.188598 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.74s 2025-06-02 17:38:08.188602 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.74s 2025-06-02 17:38:08.188606 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.71s 2025-06-02 17:38:08.188609 | orchestrator | 2025-06-02 17:38:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:11.246712 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:11.246949 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:11.246966 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:11.246977 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task 7d0dbe26-efaf-484f-9901-058744dbf147 is in state STARTED 2025-06-02 17:38:11.246987 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task 2ac1bd8e-9b31-464e-93ce-0b45be5c4425 is in state STARTED 2025-06-02 17:38:11.246998 | orchestrator | 2025-06-02 17:38:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
17:38:14.279871 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:14.280139 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:14.280159 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:14.280192 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task 7d0dbe26-efaf-484f-9901-058744dbf147 is in state STARTED 2025-06-02 17:38:14.280208 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task 2ac1bd8e-9b31-464e-93ce-0b45be5c4425 is in state STARTED 2025-06-02 17:38:14.280222 | orchestrator | 2025-06-02 17:38:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:17.340859 | orchestrator | 2025-06-02 17:38:17 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:17.341774 | orchestrator | 2025-06-02 17:38:17 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:17.342412 | orchestrator | 2025-06-02 17:38:17 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:17.342987 | orchestrator | 2025-06-02 17:38:17 | INFO  | Task 7d0dbe26-efaf-484f-9901-058744dbf147 is in state SUCCESS 2025-06-02 17:38:17.343864 | orchestrator | 2025-06-02 17:38:17 | INFO  | Task 2ac1bd8e-9b31-464e-93ce-0b45be5c4425 is in state STARTED 2025-06-02 17:38:17.343906 | orchestrator | 2025-06-02 17:38:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:20.392184 | orchestrator | 2025-06-02 17:38:20 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:20.394137 | orchestrator | 2025-06-02 17:38:20 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:20.395944 | orchestrator | 2025-06-02 17:38:20 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 
17:38:20.397396 | orchestrator | 2025-06-02 17:38:20 | INFO  | Task 2ac1bd8e-9b31-464e-93ce-0b45be5c4425 is in state SUCCESS 2025-06-02 17:38:20.397477 | orchestrator | 2025-06-02 17:38:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:23.457642 | orchestrator | 2025-06-02 17:38:23 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:23.457733 | orchestrator | 2025-06-02 17:38:23 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:23.457758 | orchestrator | 2025-06-02 17:38:23 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:23.457763 | orchestrator | 2025-06-02 17:38:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:26.499722 | orchestrator | 2025-06-02 17:38:26 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:26.499881 | orchestrator | 2025-06-02 17:38:26 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:26.501331 | orchestrator | 2025-06-02 17:38:26 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:26.501360 | orchestrator | 2025-06-02 17:38:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:29.557239 | orchestrator | 2025-06-02 17:38:29 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:29.557978 | orchestrator | 2025-06-02 17:38:29 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:29.560240 | orchestrator | 2025-06-02 17:38:29 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:29.560499 | orchestrator | 2025-06-02 17:38:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:32.609049 | orchestrator | 2025-06-02 17:38:32 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:32.611170 | orchestrator | 2025-06-02 17:38:32 | 
INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:32.613059 | orchestrator | 2025-06-02 17:38:32 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:32.613361 | orchestrator | 2025-06-02 17:38:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:35.649785 | orchestrator | 2025-06-02 17:38:35 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:35.652050 | orchestrator | 2025-06-02 17:38:35 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:35.653923 | orchestrator | 2025-06-02 17:38:35 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:35.653982 | orchestrator | 2025-06-02 17:38:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:38.706339 | orchestrator | 2025-06-02 17:38:38 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:38.706869 | orchestrator | 2025-06-02 17:38:38 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:38.707788 | orchestrator | 2025-06-02 17:38:38 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:38.707837 | orchestrator | 2025-06-02 17:38:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:41.761220 | orchestrator | 2025-06-02 17:38:41 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:41.761379 | orchestrator | 2025-06-02 17:38:41 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:41.763475 | orchestrator | 2025-06-02 17:38:41 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:41.764938 | orchestrator | 2025-06-02 17:38:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:44.807938 | orchestrator | 2025-06-02 17:38:44 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in 
state STARTED 2025-06-02 17:38:44.808024 | orchestrator | 2025-06-02 17:38:44 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:44.809740 | orchestrator | 2025-06-02 17:38:44 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:44.810853 | orchestrator | 2025-06-02 17:38:44 | INFO  | Task 2c742f35-617b-4f09-b23f-a1b4344fde07 is in state STARTED 2025-06-02 17:38:44.810945 | orchestrator | 2025-06-02 17:38:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:47.847118 | orchestrator | 2025-06-02 17:38:47 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:47.847682 | orchestrator | 2025-06-02 17:38:47 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:47.848649 | orchestrator | 2025-06-02 17:38:47 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:47.851456 | orchestrator | 2025-06-02 17:38:47 | INFO  | Task 2c742f35-617b-4f09-b23f-a1b4344fde07 is in state STARTED 2025-06-02 17:38:47.851509 | orchestrator | 2025-06-02 17:38:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:50.905060 | orchestrator | 2025-06-02 17:38:50 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 17:38:50.905150 | orchestrator | 2025-06-02 17:38:50 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:38:50.905967 | orchestrator | 2025-06-02 17:38:50 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:38:50.906638 | orchestrator | 2025-06-02 17:38:50 | INFO  | Task 2c742f35-617b-4f09-b23f-a1b4344fde07 is in state STARTED 2025-06-02 17:38:50.906660 | orchestrator | 2025-06-02 17:38:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:38:53.958552 | orchestrator | 2025-06-02 17:38:53 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED 2025-06-02 
17:38:53.961787 | orchestrator | 2025-06-02 17:38:53 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:38:53.964198 | orchestrator | 2025-06-02 17:38:53 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:38:53.966812 | orchestrator | 2025-06-02 17:38:53 | INFO  | Task 2c742f35-617b-4f09-b23f-a1b4344fde07 is in state STARTED
2025-06-02 17:38:53.966841 | orchestrator | 2025-06-02 17:38:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:38:57.016429 | orchestrator | 2025-06-02 17:38:57 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:38:57.016587 | orchestrator | 2025-06-02 17:38:57 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:38:57.016606 | orchestrator | 2025-06-02 17:38:57 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:38:57.016917 | orchestrator | 2025-06-02 17:38:57 | INFO  | Task 2c742f35-617b-4f09-b23f-a1b4344fde07 is in state STARTED
2025-06-02 17:38:57.016954 | orchestrator | 2025-06-02 17:38:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:00.077241 | orchestrator | 2025-06-02 17:39:00 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:39:00.077948 | orchestrator | 2025-06-02 17:39:00 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:39:00.082268 | orchestrator | 2025-06-02 17:39:00 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:39:00.084619 | orchestrator | 2025-06-02 17:39:00 | INFO  | Task 2c742f35-617b-4f09-b23f-a1b4344fde07 is in state STARTED
2025-06-02 17:39:00.084639 | orchestrator | 2025-06-02 17:39:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:03.143242 | orchestrator | 2025-06-02 17:39:03 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:39:03.144523 | orchestrator | 2025-06-02 17:39:03 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:39:03.147796 | orchestrator | 2025-06-02 17:39:03 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:39:03.149124 | orchestrator | 2025-06-02 17:39:03 | INFO  | Task 2c742f35-617b-4f09-b23f-a1b4344fde07 is in state SUCCESS
2025-06-02 17:39:03.149155 | orchestrator | 2025-06-02 17:39:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:06.214639 | orchestrator | 2025-06-02 17:39:06 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:39:06.216762 | orchestrator | 2025-06-02 17:39:06 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:39:06.218852 | orchestrator | 2025-06-02 17:39:06 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:39:06.219076 | orchestrator | 2025-06-02 17:39:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:09.252626 | orchestrator | 2025-06-02 17:39:09 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:39:09.253403 | orchestrator | 2025-06-02 17:39:09 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:39:09.255100 | orchestrator | 2025-06-02 17:39:09 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:39:09.255859 | orchestrator | 2025-06-02 17:39:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:12.306693 | orchestrator | 2025-06-02 17:39:12 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state STARTED
2025-06-02 17:39:12.306873 | orchestrator | 2025-06-02 17:39:12 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:39:12.310372 | orchestrator | 2025-06-02 17:39:12 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED
2025-06-02 17:39:12.311345 | orchestrator | 2025-06-02 17:39:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:15.367344 | orchestrator | 2025-06-02 17:39:15 | INFO  | Task fa190b7b-2c39-4ed7-87cf-6f92aaf82790 is in state SUCCESS
2025-06-02 17:39:15.369368 | orchestrator |
2025-06-02 17:39:15.369426 | orchestrator |
2025-06-02 17:39:15.369435 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-06-02 17:39:15.369443 | orchestrator |
2025-06-02 17:39:15.369450 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-02 17:39:15.369457 | orchestrator | Monday 02 June 2025 17:38:12 +0000 (0:00:00.182) 0:00:00.182 ***********
2025-06-02 17:39:15.369464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-02 17:39:15.369471 | orchestrator |
2025-06-02 17:39:15.369478 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-02 17:39:15.369484 | orchestrator | Monday 02 June 2025 17:38:13 +0000 (0:00:00.733) 0:00:00.916 ***********
2025-06-02 17:39:15.369491 | orchestrator | changed: [testbed-manager]
2025-06-02 17:39:15.369530 | orchestrator |
2025-06-02 17:39:15.369536 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-06-02 17:39:15.369542 | orchestrator | Monday 02 June 2025 17:38:14 +0000 (0:00:00.951) 0:00:01.867 ***********
2025-06-02 17:39:15.369549 | orchestrator | changed: [testbed-manager]
2025-06-02 17:39:15.369556 | orchestrator |
2025-06-02 17:39:15.369562 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:39:15.369569 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:39:15.369576 | orchestrator |
2025-06-02 17:39:15.369582 | orchestrator |
2025-06-02 17:39:15.369588 | orchestrator | TASKS RECAP ********************************************************************
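The wait loop above (poll each outstanding task, log its state, sleep, repeat until everything reports SUCCESS) can be sketched as follows. This is a minimal illustration of the pattern only; `get_task_state` is a hypothetical callback, not a function from the OSISM tooling.

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until every task reports SUCCESS, logging each check."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. STARTED or SUCCESS
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note that, as in the log, finished tasks drop out of the polling set one by one (2c742f35… stops appearing after its SUCCESS) while the rest keep being checked.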
2025-06-02 17:39:15.369617 | orchestrator | Monday 02 June 2025 17:38:14 +0000 (0:00:00.392) 0:00:02.260 *********** 2025-06-02 17:39:15.369624 | orchestrator | =============================================================================== 2025-06-02 17:39:15.369631 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.95s 2025-06-02 17:39:15.369638 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2025-06-02 17:39:15.369644 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2025-06-02 17:39:15.369650 | orchestrator | 2025-06-02 17:39:15.369656 | orchestrator | 2025-06-02 17:39:15.369663 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-02 17:39:15.369669 | orchestrator | 2025-06-02 17:39:15.369675 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-02 17:39:15.369683 | orchestrator | Monday 02 June 2025 17:38:12 +0000 (0:00:00.155) 0:00:00.155 *********** 2025-06-02 17:39:15.369688 | orchestrator | ok: [testbed-manager] 2025-06-02 17:39:15.369710 | orchestrator | 2025-06-02 17:39:15.369716 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-02 17:39:15.369722 | orchestrator | Monday 02 June 2025 17:38:12 +0000 (0:00:00.527) 0:00:00.683 *********** 2025-06-02 17:39:15.369728 | orchestrator | ok: [testbed-manager] 2025-06-02 17:39:15.369733 | orchestrator | 2025-06-02 17:39:15.369739 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 17:39:15.369745 | orchestrator | Monday 02 June 2025 17:38:13 +0000 (0:00:00.551) 0:00:01.235 *********** 2025-06-02 17:39:15.369752 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 17:39:15.369758 | orchestrator | 2025-06-02 17:39:15.369764 | 
orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 17:39:15.369770 | orchestrator | Monday 02 June 2025 17:38:13 +0000 (0:00:00.653) 0:00:01.888 *********** 2025-06-02 17:39:15.369777 | orchestrator | changed: [testbed-manager] 2025-06-02 17:39:15.369783 | orchestrator | 2025-06-02 17:39:15.369790 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-02 17:39:15.369796 | orchestrator | Monday 02 June 2025 17:38:14 +0000 (0:00:01.104) 0:00:02.993 *********** 2025-06-02 17:39:15.369802 | orchestrator | changed: [testbed-manager] 2025-06-02 17:39:15.369808 | orchestrator | 2025-06-02 17:39:15.369814 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-02 17:39:15.369834 | orchestrator | Monday 02 June 2025 17:38:15 +0000 (0:00:00.807) 0:00:03.801 *********** 2025-06-02 17:39:15.369841 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 17:39:15.369847 | orchestrator | 2025-06-02 17:39:15.369853 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-02 17:39:15.369859 | orchestrator | Monday 02 June 2025 17:38:17 +0000 (0:00:01.650) 0:00:05.451 *********** 2025-06-02 17:39:15.369865 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 17:39:15.369871 | orchestrator | 2025-06-02 17:39:15.369878 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-02 17:39:15.369885 | orchestrator | Monday 02 June 2025 17:38:18 +0000 (0:00:00.865) 0:00:06.317 *********** 2025-06-02 17:39:15.369891 | orchestrator | ok: [testbed-manager] 2025-06-02 17:39:15.369897 | orchestrator | 2025-06-02 17:39:15.369902 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-02 17:39:15.369908 | orchestrator | Monday 02 June 2025 17:38:18 +0000 (0:00:00.461) 
0:00:06.778 *********** 2025-06-02 17:39:15.369914 | orchestrator | ok: [testbed-manager] 2025-06-02 17:39:15.369919 | orchestrator | 2025-06-02 17:39:15.369926 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:39:15.369933 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:39:15.369939 | orchestrator | 2025-06-02 17:39:15.369945 | orchestrator | 2025-06-02 17:39:15.369951 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:39:15.369968 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:00.317) 0:00:07.096 *********** 2025-06-02 17:39:15.369974 | orchestrator | =============================================================================== 2025-06-02 17:39:15.369981 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.65s 2025-06-02 17:39:15.369987 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.10s 2025-06-02 17:39:15.369993 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.87s 2025-06-02 17:39:15.370170 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.81s 2025-06-02 17:39:15.370192 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.65s 2025-06-02 17:39:15.370199 | orchestrator | Create .kube directory -------------------------------------------------- 0.55s 2025-06-02 17:39:15.370206 | orchestrator | Get home directory of operator user ------------------------------------- 0.53s 2025-06-02 17:39:15.370212 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.46s 2025-06-02 17:39:15.370219 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s 2025-06-02 17:39:15.370225 | 
orchestrator | 2025-06-02 17:39:15.370233 | orchestrator | None 2025-06-02 17:39:15.370240 | orchestrator | 2025-06-02 17:39:15.370246 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:39:15.370253 | orchestrator | 2025-06-02 17:39:15.370259 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:39:15.370265 | orchestrator | Monday 02 June 2025 17:36:32 +0000 (0:00:00.161) 0:00:00.161 *********** 2025-06-02 17:39:15.370272 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.370279 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.370285 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.370292 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:39:15.370299 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:39:15.370305 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:39:15.370312 | orchestrator | 2025-06-02 17:39:15.370320 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:39:15.370328 | orchestrator | Monday 02 June 2025 17:36:33 +0000 (0:00:00.816) 0:00:00.977 *********** 2025-06-02 17:39:15.370335 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-02 17:39:15.370343 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-02 17:39:15.370350 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-02 17:39:15.370357 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-02 17:39:15.370365 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-02 17:39:15.370371 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-02 17:39:15.370377 | orchestrator | 2025-06-02 17:39:15.370384 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-02 17:39:15.370391 | orchestrator | 2025-06-02 17:39:15.370398 | orchestrator | 
TASK [ovn-controller : include_tasks] ****************************************** 2025-06-02 17:39:15.370405 | orchestrator | Monday 02 June 2025 17:36:33 +0000 (0:00:00.824) 0:00:01.802 *********** 2025-06-02 17:39:15.370432 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:39:15.370442 | orchestrator | 2025-06-02 17:39:15.370448 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-02 17:39:15.370455 | orchestrator | Monday 02 June 2025 17:36:35 +0000 (0:00:01.037) 0:00:02.839 *********** 2025-06-02 17:39:15.370465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370576 | orchestrator | 2025-06-02 17:39:15.370583 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-02 17:39:15.370590 | orchestrator | Monday 02 June 2025 17:36:36 +0000 (0:00:01.441) 0:00:04.280 *********** 2025-06-02 17:39:15.370596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370660 | orchestrator | 2025-06-02 17:39:15.370667 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-02 17:39:15.370674 | orchestrator | Monday 02 June 2025 17:36:38 +0000 (0:00:01.605) 0:00:05.886 *********** 2025-06-02 17:39:15.370681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-02 17:39:15.370703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370747 | orchestrator | 2025-06-02 17:39:15.370754 | orchestrator | 
TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-02 17:39:15.370761 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:01.549) 0:00:07.435 *********** 2025-06-02 17:39:15.370771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370883 | orchestrator | 2025-06-02 17:39:15.370894 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-02 17:39:15.370901 | orchestrator | Monday 02 June 2025 17:36:42 +0000 (0:00:02.756) 0:00:10.192 *********** 2025-06-02 17:39:15.370909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.370963 | orchestrator | 2025-06-02 17:39:15.370970 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-02 17:39:15.370977 | orchestrator | Monday 02 June 2025 17:36:43 +0000 (0:00:01.455) 0:00:11.647 *********** 2025-06-02 17:39:15.370983 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.370990 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.370996 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.371003 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:39:15.371008 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:39:15.371015 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:39:15.371021 | orchestrator | 2025-06-02 17:39:15.371032 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-02 17:39:15.371039 | orchestrator | Monday 02 June 2025 17:36:46 +0000 (0:00:02.540) 0:00:14.188 *********** 2025-06-02 17:39:15.371067 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-02 17:39:15.371076 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-02 17:39:15.371083 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-02 17:39:15.371090 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-02 17:39:15.371096 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-02 17:39:15.371103 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-02 17:39:15.371116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 17:39:15.371124 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 17:39:15.371130 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 17:39:15.371137 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 17:39:15.371143 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 17:39:15.371150 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-02 17:39:15.371157 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 17:39:15.371165 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 17:39:15.371172 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 17:39:15.371179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 17:39:15.371186 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 17:39:15.371193 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-02 17:39:15.371200 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 17:39:15.372600 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 17:39:15.372626 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 17:39:15.372633 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 17:39:15.372649 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 17:39:15.372656 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-02 17:39:15.372662 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 17:39:15.372668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 17:39:15.372674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 17:39:15.372686 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 17:39:15.372695 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 17:39:15.372702 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-02 17:39:15.372709 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 17:39:15.372716 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 17:39:15.372723 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 17:39:15.372729 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 17:39:15.372736 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 17:39:15.372742 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-02 17:39:15.372773 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-02 17:39:15.372791 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-02 17:39:15.372797 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-02 17:39:15.372804 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-02 17:39:15.372810 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-02 17:39:15.372816 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-02 17:39:15.372823 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-02 17:39:15.372831 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-02 17:39:15.372838 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-02 17:39:15.372844 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-02 17:39:15.372851 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-02 17:39:15.372857 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-02 17:39:15.372864 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-02 17:39:15.372871 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-02 17:39:15.372877 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-02 17:39:15.372884 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-02 17:39:15.372890 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-02 17:39:15.372896 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-02 17:39:15.372902 | orchestrator | 2025-06-02 17:39:15.372909 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 17:39:15.372915 | orchestrator | Monday 02 June 2025 17:37:05 +0000 (0:00:19.386) 0:00:33.574 *********** 2025-06-02 17:39:15.372921 | orchestrator | 2025-06-02 17:39:15.372928 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 17:39:15.372933 | orchestrator | Monday 02 June 2025 17:37:05 +0000 (0:00:00.064) 0:00:33.639 *********** 2025-06-02 17:39:15.372939 | orchestrator | 2025-06-02 17:39:15.372945 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 
17:39:15.372962 | orchestrator | Monday 02 June 2025 17:37:05 +0000 (0:00:00.073) 0:00:33.712 *********** 2025-06-02 17:39:15.372968 | orchestrator | 2025-06-02 17:39:15.372974 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 17:39:15.372981 | orchestrator | Monday 02 June 2025 17:37:05 +0000 (0:00:00.064) 0:00:33.777 *********** 2025-06-02 17:39:15.372987 | orchestrator | 2025-06-02 17:39:15.372993 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 17:39:15.372999 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:00.064) 0:00:33.842 *********** 2025-06-02 17:39:15.373010 | orchestrator | 2025-06-02 17:39:15.373016 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-02 17:39:15.373022 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:00.072) 0:00:33.914 *********** 2025-06-02 17:39:15.373029 | orchestrator | 2025-06-02 17:39:15.373034 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-02 17:39:15.373040 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:00.065) 0:00:33.980 *********** 2025-06-02 17:39:15.373047 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:39:15.373055 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.373061 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.373067 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.373073 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:39:15.373079 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:39:15.373085 | orchestrator | 2025-06-02 17:39:15.373091 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-02 17:39:15.373097 | orchestrator | Monday 02 June 2025 17:37:08 +0000 (0:00:01.920) 0:00:35.900 *********** 2025-06-02 17:39:15.373103 | orchestrator | changed: [testbed-node-0] 
2025-06-02 17:39:15.373110 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.373116 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.373123 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:39:15.373129 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:39:15.373135 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:39:15.373141 | orchestrator | 2025-06-02 17:39:15.373147 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-02 17:39:15.373153 | orchestrator | 2025-06-02 17:39:15.373159 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 17:39:15.373171 | orchestrator | Monday 02 June 2025 17:37:42 +0000 (0:00:34.573) 0:01:10.473 *********** 2025-06-02 17:39:15.373177 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:39:15.373184 | orchestrator | 2025-06-02 17:39:15.373190 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 17:39:15.373196 | orchestrator | Monday 02 June 2025 17:37:43 +0000 (0:00:00.653) 0:01:11.126 *********** 2025-06-02 17:39:15.373202 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:39:15.373209 | orchestrator | 2025-06-02 17:39:15.373214 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-02 17:39:15.373221 | orchestrator | Monday 02 June 2025 17:37:44 +0000 (0:00:00.763) 0:01:11.890 *********** 2025-06-02 17:39:15.373226 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.373233 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.373239 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.373245 | orchestrator | 2025-06-02 17:39:15.373251 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB 
volume availability] *************** 2025-06-02 17:39:15.373257 | orchestrator | Monday 02 June 2025 17:37:44 +0000 (0:00:00.811) 0:01:12.702 *********** 2025-06-02 17:39:15.373262 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.373269 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.373274 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.373280 | orchestrator | 2025-06-02 17:39:15.373286 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-02 17:39:15.373292 | orchestrator | Monday 02 June 2025 17:37:45 +0000 (0:00:00.451) 0:01:13.153 *********** 2025-06-02 17:39:15.373298 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.373304 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.373310 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.373316 | orchestrator | 2025-06-02 17:39:15.373323 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-02 17:39:15.373329 | orchestrator | Monday 02 June 2025 17:37:45 +0000 (0:00:00.331) 0:01:13.485 *********** 2025-06-02 17:39:15.373335 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.373363 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.373370 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.373377 | orchestrator | 2025-06-02 17:39:15.373384 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-02 17:39:15.373390 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.546) 0:01:14.032 *********** 2025-06-02 17:39:15.373396 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.373403 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.373409 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.373415 | orchestrator | 2025-06-02 17:39:15.373422 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-02 17:39:15.373428 | 
orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.360) 0:01:14.393 *********** 2025-06-02 17:39:15.373435 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373442 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373448 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373455 | orchestrator | 2025-06-02 17:39:15.373461 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-02 17:39:15.373467 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.329) 0:01:14.722 *********** 2025-06-02 17:39:15.373473 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373481 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373487 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373546 | orchestrator | 2025-06-02 17:39:15.373554 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-02 17:39:15.373561 | orchestrator | Monday 02 June 2025 17:37:47 +0000 (0:00:00.351) 0:01:15.073 *********** 2025-06-02 17:39:15.373568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373574 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373581 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373587 | orchestrator | 2025-06-02 17:39:15.373600 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-02 17:39:15.373606 | orchestrator | Monday 02 June 2025 17:37:47 +0000 (0:00:00.638) 0:01:15.711 *********** 2025-06-02 17:39:15.373613 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373619 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373625 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373631 | orchestrator | 2025-06-02 17:39:15.373638 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-02 17:39:15.373644 | 
orchestrator | Monday 02 June 2025 17:37:48 +0000 (0:00:00.335) 0:01:16.047 *********** 2025-06-02 17:39:15.373650 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373656 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373662 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373669 | orchestrator | 2025-06-02 17:39:15.373676 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-02 17:39:15.373705 | orchestrator | Monday 02 June 2025 17:37:48 +0000 (0:00:00.297) 0:01:16.345 *********** 2025-06-02 17:39:15.373713 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373720 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373726 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373733 | orchestrator | 2025-06-02 17:39:15.373739 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-02 17:39:15.373746 | orchestrator | Monday 02 June 2025 17:37:48 +0000 (0:00:00.397) 0:01:16.742 *********** 2025-06-02 17:39:15.373753 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373759 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373766 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373773 | orchestrator | 2025-06-02 17:39:15.373779 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-02 17:39:15.373786 | orchestrator | Monday 02 June 2025 17:37:49 +0000 (0:00:00.565) 0:01:17.308 *********** 2025-06-02 17:39:15.373793 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.373800 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.373816 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.373822 | orchestrator | 2025-06-02 17:39:15.373829 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-02 17:39:15.377482 | 
orchestrator | Monday 02 June 2025 17:37:49 +0000 (0:00:00.316) 0:01:17.624 *********** 2025-06-02 17:39:15.377564 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.377575 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.377582 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.377588 | orchestrator | 2025-06-02 17:39:15.377597 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-02 17:39:15.377606 | orchestrator | Monday 02 June 2025 17:37:50 +0000 (0:00:00.303) 0:01:17.928 *********** 2025-06-02 17:39:15.377613 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.377620 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.377627 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.377633 | orchestrator | 2025-06-02 17:39:15.377640 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-02 17:39:15.377647 | orchestrator | Monday 02 June 2025 17:37:50 +0000 (0:00:00.362) 0:01:18.290 *********** 2025-06-02 17:39:15.377654 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.377661 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.377667 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.377674 | orchestrator | 2025-06-02 17:39:15.377681 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-02 17:39:15.377688 | orchestrator | Monday 02 June 2025 17:37:50 +0000 (0:00:00.495) 0:01:18.786 *********** 2025-06-02 17:39:15.377695 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.377702 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.377709 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.377715 | orchestrator | 2025-06-02 17:39:15.377722 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 17:39:15.377729 | 
orchestrator | Monday 02 June 2025 17:37:51 +0000 (0:00:00.447) 0:01:19.233 *********** 2025-06-02 17:39:15.377737 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:39:15.377744 | orchestrator | 2025-06-02 17:39:15.377751 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-02 17:39:15.377757 | orchestrator | Monday 02 June 2025 17:37:52 +0000 (0:00:00.855) 0:01:20.089 *********** 2025-06-02 17:39:15.377764 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.377772 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.377779 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.377786 | orchestrator | 2025-06-02 17:39:15.377792 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-02 17:39:15.377799 | orchestrator | Monday 02 June 2025 17:37:53 +0000 (0:00:01.553) 0:01:21.643 *********** 2025-06-02 17:39:15.377806 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.377813 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.377820 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.377827 | orchestrator | 2025-06-02 17:39:15.377833 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-02 17:39:15.377840 | orchestrator | Monday 02 June 2025 17:37:54 +0000 (0:00:01.149) 0:01:22.792 *********** 2025-06-02 17:39:15.377847 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.377854 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.377861 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.377868 | orchestrator | 2025-06-02 17:39:15.377875 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-02 17:39:15.377882 | orchestrator | Monday 02 June 2025 17:37:55 +0000 (0:00:00.635) 0:01:23.428 *********** 
2025-06-02 17:39:15.377890 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.377897 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.377903 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.377910 | orchestrator | 2025-06-02 17:39:15.377930 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-02 17:39:15.377938 | orchestrator | Monday 02 June 2025 17:37:56 +0000 (0:00:00.483) 0:01:23.912 *********** 2025-06-02 17:39:15.377945 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.377951 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.377958 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.377966 | orchestrator | 2025-06-02 17:39:15.377979 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-02 17:39:15.377987 | orchestrator | Monday 02 June 2025 17:37:57 +0000 (0:00:01.226) 0:01:25.139 *********** 2025-06-02 17:39:15.377994 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.378001 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.378008 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.378045 | orchestrator | 2025-06-02 17:39:15.378057 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-02 17:39:15.378066 | orchestrator | Monday 02 June 2025 17:37:58 +0000 (0:00:00.960) 0:01:26.099 *********** 2025-06-02 17:39:15.378075 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.378083 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.378092 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.378099 | orchestrator | 2025-06-02 17:39:15.378108 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-02 17:39:15.378117 | orchestrator | Monday 02 June 2025 17:37:59 +0000 (0:00:00.764) 0:01:26.864 
*********** 2025-06-02 17:39:15.378126 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.378135 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.378143 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.378151 | orchestrator | 2025-06-02 17:39:15.378159 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 17:39:15.378167 | orchestrator | Monday 02 June 2025 17:37:59 +0000 (0:00:00.764) 0:01:27.628 *********** 2025-06-02 17:39:15.378178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.378203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.378212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.378241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.378252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.378267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.378275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379687 | orchestrator | 2025-06-02 17:39:15.379696 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 17:39:15.379704 | orchestrator | Monday 02 June 2025 17:38:01 +0000 (0:00:01.774) 0:01:29.403 *********** 2025-06-02 17:39:15.379713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 
17:39:15.379821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379837 | orchestrator | 2025-06-02 17:39:15.379844 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 17:39:15.379850 | orchestrator | Monday 02 June 2025 17:38:06 +0000 (0:00:04.760) 0:01:34.163 *********** 2025-06-02 17:39:15.379857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 17:39:15.379878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.379937 | orchestrator | 2025-06-02 17:39:15.379944 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:39:15.379950 | orchestrator | Monday 02 June 2025 17:38:08 +0000 (0:00:02.431) 0:01:36.595 *********** 2025-06-02 17:39:15.379957 | orchestrator | 2025-06-02 17:39:15.379964 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:39:15.379970 | orchestrator | Monday 02 June 2025 17:38:09 +0000 (0:00:00.255) 0:01:36.850 *********** 2025-06-02 17:39:15.379977 | orchestrator | 2025-06-02 17:39:15.379984 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:39:15.379990 | orchestrator | Monday 02 June 2025 17:38:09 
+0000 (0:00:00.280) 0:01:37.131 *********** 2025-06-02 17:39:15.379996 | orchestrator | 2025-06-02 17:39:15.380003 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 17:39:15.380009 | orchestrator | Monday 02 June 2025 17:38:09 +0000 (0:00:00.268) 0:01:37.400 *********** 2025-06-02 17:39:15.380015 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.380022 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.380028 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.380034 | orchestrator | 2025-06-02 17:39:15.380040 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 17:39:15.380046 | orchestrator | Monday 02 June 2025 17:38:18 +0000 (0:00:09.003) 0:01:46.403 *********** 2025-06-02 17:39:15.380052 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.380058 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.380064 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.380071 | orchestrator | 2025-06-02 17:39:15.380078 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 17:39:15.380084 | orchestrator | Monday 02 June 2025 17:38:26 +0000 (0:00:08.058) 0:01:54.462 *********** 2025-06-02 17:39:15.380091 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.380097 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.380110 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.380117 | orchestrator | 2025-06-02 17:39:15.380128 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 17:39:15.380135 | orchestrator | Monday 02 June 2025 17:38:34 +0000 (0:00:07.796) 0:02:02.259 *********** 2025-06-02 17:39:15.380141 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.380148 | orchestrator | 2025-06-02 17:39:15.380154 | orchestrator | TASK [ovn-db : Get 
OVN_Northbound cluster leader] ****************************** 2025-06-02 17:39:15.380161 | orchestrator | Monday 02 June 2025 17:38:34 +0000 (0:00:00.131) 0:02:02.391 *********** 2025-06-02 17:39:15.380167 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.380174 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.380180 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.380186 | orchestrator | 2025-06-02 17:39:15.380193 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 17:39:15.380200 | orchestrator | Monday 02 June 2025 17:38:35 +0000 (0:00:00.782) 0:02:03.173 *********** 2025-06-02 17:39:15.380207 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.380213 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.380219 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.380226 | orchestrator | 2025-06-02 17:39:15.380232 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 17:39:15.380239 | orchestrator | Monday 02 June 2025 17:38:36 +0000 (0:00:00.912) 0:02:04.085 *********** 2025-06-02 17:39:15.380246 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.380253 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.380259 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.380266 | orchestrator | 2025-06-02 17:39:15.380272 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 17:39:15.380278 | orchestrator | Monday 02 June 2025 17:38:37 +0000 (0:00:00.801) 0:02:04.886 *********** 2025-06-02 17:39:15.380284 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.380291 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.380296 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.380302 | orchestrator | 2025-06-02 17:39:15.380309 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] 
********************************************* 2025-06-02 17:39:15.380316 | orchestrator | Monday 02 June 2025 17:38:37 +0000 (0:00:00.630) 0:02:05.517 *********** 2025-06-02 17:39:15.380323 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.380329 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.380335 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.380342 | orchestrator | 2025-06-02 17:39:15.380349 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 17:39:15.380355 | orchestrator | Monday 02 June 2025 17:38:38 +0000 (0:00:00.842) 0:02:06.359 *********** 2025-06-02 17:39:15.380362 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.380368 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.380374 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.380381 | orchestrator | 2025-06-02 17:39:15.380387 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-02 17:39:15.380394 | orchestrator | Monday 02 June 2025 17:38:39 +0000 (0:00:01.191) 0:02:07.551 *********** 2025-06-02 17:39:15.380400 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.380407 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.380413 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.380420 | orchestrator | 2025-06-02 17:39:15.380426 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 17:39:15.380432 | orchestrator | Monday 02 June 2025 17:38:40 +0000 (0:00:00.310) 0:02:07.861 *********** 2025-06-02 17:39:15.380440 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 17:39:15.380458 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380465 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380472 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380524 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380530 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380536 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380542 | orchestrator | 2025-06-02 17:39:15.380549 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 17:39:15.380556 | orchestrator | Monday 02 June 2025 17:38:41 +0000 (0:00:01.391) 0:02:09.252 *********** 2025-06-02 17:39:15.380563 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380574 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380585 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380592 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 
17:39:15.380611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380638 | orchestrator | 2025-06-02 17:39:15.380645 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 
17:39:15.380652 | orchestrator | Monday 02 June 2025 17:38:45 +0000 (0:00:04.231) 0:02:13.484 *********** 2025-06-02 17:39:15.380658 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380670 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380686 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:39:15.380735 | orchestrator | 2025-06-02 17:39:15.380742 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:39:15.380749 | orchestrator | Monday 02 June 2025 17:38:48 +0000 (0:00:03.156) 0:02:16.640 *********** 2025-06-02 17:39:15.380755 | orchestrator | 2025-06-02 17:39:15.380762 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:39:15.380768 | orchestrator | Monday 02 June 2025 17:38:48 +0000 (0:00:00.139) 0:02:16.779 *********** 2025-06-02 17:39:15.380780 | orchestrator | 2025-06-02 17:39:15.380787 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:39:15.380793 | orchestrator | Monday 02 June 2025 17:38:49 +0000 (0:00:00.130) 0:02:16.909 *********** 2025-06-02 17:39:15.380799 | orchestrator | 2025-06-02 17:39:15.380805 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 17:39:15.380811 | orchestrator | Monday 02 June 2025 17:38:49 +0000 (0:00:00.115) 0:02:17.025 *********** 2025-06-02 17:39:15.380817 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.380824 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.380830 | orchestrator | 2025-06-02 17:39:15.380836 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 17:39:15.380842 | orchestrator | Monday 02 June 2025 17:38:55 +0000 (0:00:06.431) 0:02:23.457 *********** 2025-06-02 17:39:15.380848 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.380854 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.380860 | orchestrator | 2025-06-02 17:39:15.380867 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 17:39:15.380874 
| orchestrator | Monday 02 June 2025 17:39:01 +0000 (0:00:06.165) 0:02:29.622 *********** 2025-06-02 17:39:15.380880 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:15.380887 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:15.380893 | orchestrator | 2025-06-02 17:39:15.380900 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 17:39:15.380906 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:06.196) 0:02:35.819 *********** 2025-06-02 17:39:15.380913 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:15.380920 | orchestrator | 2025-06-02 17:39:15.380927 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 17:39:15.380936 | orchestrator | Monday 02 June 2025 17:39:08 +0000 (0:00:00.158) 0:02:35.977 *********** 2025-06-02 17:39:15.380943 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.380950 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.380956 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.380962 | orchestrator | 2025-06-02 17:39:15.380969 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 17:39:15.380976 | orchestrator | Monday 02 June 2025 17:39:09 +0000 (0:00:01.159) 0:02:37.136 *********** 2025-06-02 17:39:15.380982 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.380987 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.380993 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.381000 | orchestrator | 2025-06-02 17:39:15.381006 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 17:39:15.381012 | orchestrator | Monday 02 June 2025 17:39:10 +0000 (0:00:00.715) 0:02:37.852 *********** 2025-06-02 17:39:15.381018 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.381025 | orchestrator | ok: [testbed-node-1] 2025-06-02 
17:39:15.381032 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.381039 | orchestrator | 2025-06-02 17:39:15.381045 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 17:39:15.381052 | orchestrator | Monday 02 June 2025 17:39:11 +0000 (0:00:01.027) 0:02:38.879 *********** 2025-06-02 17:39:15.381058 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:15.381065 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:15.381072 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:15.381079 | orchestrator | 2025-06-02 17:39:15.381086 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 17:39:15.381092 | orchestrator | Monday 02 June 2025 17:39:11 +0000 (0:00:00.698) 0:02:39.578 *********** 2025-06-02 17:39:15.381099 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.381107 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.381116 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.381123 | orchestrator | 2025-06-02 17:39:15.381129 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 17:39:15.381142 | orchestrator | Monday 02 June 2025 17:39:12 +0000 (0:00:01.022) 0:02:40.600 *********** 2025-06-02 17:39:15.381148 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:15.381154 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:15.381166 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:15.381172 | orchestrator | 2025-06-02 17:39:15.381178 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:39:15.381186 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 17:39:15.381193 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 17:39:15.381199 | orchestrator | 
testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 17:39:15.381205 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:39:15.381211 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:39:15.381217 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:39:15.381223 | orchestrator | 2025-06-02 17:39:15.381229 | orchestrator | 2025-06-02 17:39:15.381235 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:39:15.381241 | orchestrator | Monday 02 June 2025 17:39:13 +0000 (0:00:00.895) 0:02:41.495 *********** 2025-06-02 17:39:15.381247 | orchestrator | =============================================================================== 2025-06-02 17:39:15.381253 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.57s 2025-06-02 17:39:15.381259 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.39s 2025-06-02 17:39:15.381265 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.44s 2025-06-02 17:39:15.381271 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.22s 2025-06-02 17:39:15.381276 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.99s 2025-06-02 17:39:15.381281 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.76s 2025-06-02 17:39:15.381287 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.23s 2025-06-02 17:39:15.381293 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.16s 2025-06-02 17:39:15.381298 | orchestrator | ovn-controller : 
Copying over systemd override -------------------------- 2.76s 2025-06-02 17:39:15.381304 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.54s 2025-06-02 17:39:15.381310 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.43s 2025-06-02 17:39:15.381316 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.92s 2025-06-02 17:39:15.381322 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.77s 2025-06-02 17:39:15.381328 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.61s 2025-06-02 17:39:15.381333 | orchestrator | ovn-db : Set bootstrap args fact for NB (new cluster) ------------------- 1.55s 2025-06-02 17:39:15.381339 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.55s 2025-06-02 17:39:15.381350 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s 2025-06-02 17:39:15.381356 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.44s 2025-06-02 17:39:15.381362 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2025-06-02 17:39:15.381369 | orchestrator | ovn-db : Remove an old node with the same ip address as the new node in NB DB --- 1.23s 2025-06-02 17:39:15.381385 | orchestrator | 2025-06-02 17:39:15 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:39:15.381392 | orchestrator | 2025-06-02 17:39:15 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:39:15.381398 | orchestrator | 2025-06-02 17:39:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:18.423278 | orchestrator | 2025-06-02 17:39:18 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:39:18.424415 | orchestrator | 
2025-06-02 17:39:18 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:39:18.424464 | orchestrator | 2025-06-02 17:39:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:34.716126 | orchestrator | 2025-06-02 17:40:34 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:34.717788 | orchestrator | 2025-06-02 17:40:34 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:34.717807 | orchestrator | 2025-06-02 17:40:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:37.769956 | orchestrator | 2025-06-02 17:40:37 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02
17:40:37.770179 | orchestrator | 2025-06-02 17:40:37 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:37.770209 | orchestrator | 2025-06-02 17:40:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:40.819912 | orchestrator | 2025-06-02 17:40:40 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:40.821166 | orchestrator | 2025-06-02 17:40:40 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:40.825616 | orchestrator | 2025-06-02 17:40:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:43.867073 | orchestrator | 2025-06-02 17:40:43 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:43.873711 | orchestrator | 2025-06-02 17:40:43 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:43.873761 | orchestrator | 2025-06-02 17:40:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:46.931951 | orchestrator | 2025-06-02 17:40:46 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:46.934007 | orchestrator | 2025-06-02 17:40:46 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:46.934099 | orchestrator | 2025-06-02 17:40:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:49.978128 | orchestrator | 2025-06-02 17:40:49 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:49.980924 | orchestrator | 2025-06-02 17:40:49 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:49.980958 | orchestrator | 2025-06-02 17:40:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:53.039671 | orchestrator | 2025-06-02 17:40:53 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:53.039802 | orchestrator | 2025-06-02 17:40:53 | INFO  | Task 
acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:53.039818 | orchestrator | 2025-06-02 17:40:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:56.090436 | orchestrator | 2025-06-02 17:40:56 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:56.090795 | orchestrator | 2025-06-02 17:40:56 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:56.090823 | orchestrator | 2025-06-02 17:40:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:59.134875 | orchestrator | 2025-06-02 17:40:59 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:40:59.138483 | orchestrator | 2025-06-02 17:40:59 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:40:59.138576 | orchestrator | 2025-06-02 17:40:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:02.183136 | orchestrator | 2025-06-02 17:41:02 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:02.184844 | orchestrator | 2025-06-02 17:41:02 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:02.184902 | orchestrator | 2025-06-02 17:41:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:05.228062 | orchestrator | 2025-06-02 17:41:05 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:05.229838 | orchestrator | 2025-06-02 17:41:05 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:05.229890 | orchestrator | 2025-06-02 17:41:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:08.279930 | orchestrator | 2025-06-02 17:41:08 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:08.283515 | orchestrator | 2025-06-02 17:41:08 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 
17:41:08.283589 | orchestrator | 2025-06-02 17:41:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:11.329695 | orchestrator | 2025-06-02 17:41:11 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:11.332433 | orchestrator | 2025-06-02 17:41:11 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:11.332464 | orchestrator | 2025-06-02 17:41:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:14.395390 | orchestrator | 2025-06-02 17:41:14 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:14.396868 | orchestrator | 2025-06-02 17:41:14 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:14.396918 | orchestrator | 2025-06-02 17:41:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:17.444902 | orchestrator | 2025-06-02 17:41:17 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:17.447115 | orchestrator | 2025-06-02 17:41:17 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:17.447221 | orchestrator | 2025-06-02 17:41:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:20.489234 | orchestrator | 2025-06-02 17:41:20 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:20.489383 | orchestrator | 2025-06-02 17:41:20 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:20.490081 | orchestrator | 2025-06-02 17:41:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:23.540630 | orchestrator | 2025-06-02 17:41:23 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:23.542689 | orchestrator | 2025-06-02 17:41:23 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:23.542780 | orchestrator | 2025-06-02 17:41:23 | INFO  | Wait 1 second(s) 
until the next check 2025-06-02 17:41:26.589097 | orchestrator | 2025-06-02 17:41:26 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:26.589476 | orchestrator | 2025-06-02 17:41:26 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:26.589507 | orchestrator | 2025-06-02 17:41:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:29.642354 | orchestrator | 2025-06-02 17:41:29 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:29.642838 | orchestrator | 2025-06-02 17:41:29 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:29.642872 | orchestrator | 2025-06-02 17:41:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:32.685148 | orchestrator | 2025-06-02 17:41:32 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:32.685483 | orchestrator | 2025-06-02 17:41:32 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:32.685510 | orchestrator | 2025-06-02 17:41:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:35.736756 | orchestrator | 2025-06-02 17:41:35 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:35.737685 | orchestrator | 2025-06-02 17:41:35 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:35.737723 | orchestrator | 2025-06-02 17:41:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:38.791832 | orchestrator | 2025-06-02 17:41:38 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:38.792378 | orchestrator | 2025-06-02 17:41:38 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state STARTED 2025-06-02 17:41:38.793259 | orchestrator | 2025-06-02 17:41:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:41.847438 | orchestrator | 2025-06-02 
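The wait loop in this log polls both task UUIDs every few seconds until each reaches a terminal state. A minimal sketch of that poll-until-done pattern, assuming a hypothetical `fetch_state` callable (this is illustrative, not the OSISM client implementation):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll each task until every one reaches a terminal state.

    fetch_state(task_id) -> state string, e.g. "STARTED" or "SUCCESS".
    Returns a dict mapping each task id to its final state.
    """
    final = {}
    pending = list(task_ids)
    while pending:
        for task_id in list(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
                pending.remove(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return final
```

The `sleep` parameter is injected only so the loop can be exercised without real delays; the log's cadence suggests an interval of about one second plus the query round-trip.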
2025-06-02 17:41:54.053579 | orchestrator | 2025-06-02 17:41:54 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:41:54.065344 | orchestrator | 2025-06-02 17:41:54 | INFO  | Task acd4832e-f84e-49bc-bd61-c03416a5926b is in state SUCCESS
2025-06-02 17:41:54.067748 | orchestrator |
2025-06-02 17:41:54.067808 | orchestrator |
2025-06-02 17:41:54.067821 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:41:54.067834 |
orchestrator |
2025-06-02 17:41:54.067845 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:41:54.067856 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.274) 0:00:00.274 ***********
2025-06-02 17:41:54.067867 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:41:54.067879 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:41:54.067890 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:41:54.067900 | orchestrator |
2025-06-02 17:41:54.067911 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:41:54.067992 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.298) 0:00:00.572 ***********
2025-06-02 17:41:54.068007 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-02 17:41:54.068019 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-02 17:41:54.068030 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-02 17:41:54.068139 | orchestrator |
2025-06-02 17:41:54.068152 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-02 17:41:54.068163 | orchestrator |
2025-06-02 17:41:54.068175 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-02 17:41:54.068186 | orchestrator | Monday 02 June 2025 17:35:14 +0000 (0:00:00.409) 0:00:00.981 ***********
2025-06-02 17:41:54.068198 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.068213 | orchestrator |
2025-06-02 17:41:54.068231 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-02 17:41:54.068294 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:00.934) 0:00:01.916 ***********
2025-06-02 17:41:54.068313 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:41:54.068325 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:41:54.068336 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:41:54.068346 | orchestrator |
2025-06-02 17:41:54.068358 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-02 17:41:54.068396 | orchestrator | Monday 02 June 2025 17:35:17 +0000 (0:00:02.105) 0:00:04.021 ***********
2025-06-02 17:41:54.068408 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.068419 | orchestrator |
2025-06-02 17:41:54.068430 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-02 17:41:54.068440 | orchestrator | Monday 02 June 2025 17:35:19 +0000 (0:00:01.680) 0:00:05.701 ***********
2025-06-02 17:41:54.068451 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:41:54.068462 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:41:54.068472 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:41:54.068483 | orchestrator |
2025-06-02 17:41:54.068494 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-02 17:41:54.068505 | orchestrator | Monday 02 June 2025 17:35:20 +0000 (0:00:01.080) 0:00:06.781 ***********
2025-06-02 17:41:54.068516 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 17:41:54.068526 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 17:41:54.068537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 17:41:54.068548 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 17:41:54.068558 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 17:41:54.068583 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 17:41:54.068595 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 17:41:54.068607 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 17:41:54.068667 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 17:41:54.068680 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 17:41:54.068692 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 17:41:54.068703 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 17:41:54.068714 | orchestrator |
2025-06-02 17:41:54.068724 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 17:41:54.068735 | orchestrator | Monday 02 June 2025 17:35:24 +0000 (0:00:03.627) 0:00:10.409 ***********
2025-06-02 17:41:54.068746 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 17:41:54.068757 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 17:41:54.068768 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 17:41:54.068779 | orchestrator |
2025-06-02 17:41:54.068790 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 17:41:54.068854 | orchestrator | Monday 02 June 2025 17:35:25 +0000 (0:00:01.412) 0:00:11.821 ***********
2025-06-02 17:41:54.068867 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 17:41:54.068879 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 17:41:54.068889 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 17:41:54.068900 | orchestrator |
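In the sysctl task above, entries whose value is the sentinel `KOLLA_UNSET` (here `net.ipv4.tcp_retries2`) are left untouched and report `ok`, while concrete values report `changed`. A sketch of that sentinel filtering, as an illustration of the pattern rather than the actual kolla-ansible role code:

```python
# Items mirrored from the task output above.
SYSCTL_ITEMS = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]

def items_to_apply(items, sentinel="KOLLA_UNSET"):
    """Keep only entries that carry a concrete value; sentinel entries are
    skipped, so nothing is written for them and the task reports 'ok'."""
    return [item for item in items if item["value"] != sentinel]
```

With the items above, only the three `*_nonlocal_bind` and `max_dgram_qlen` settings would actually be applied.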
2025-06-02 17:41:54.068911 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 17:41:54.068922 | orchestrator | Monday 02 June 2025 17:35:27 +0000 (0:00:01.941) 0:00:13.763 ***********
2025-06-02 17:41:54.068933 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-02 17:41:54.068944 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.068973 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-02 17:41:54.068984 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.068995 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-02 17:41:54.069014 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.069025 | orchestrator |
2025-06-02 17:41:54.069036 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-02 17:41:54.069047 | orchestrator | Monday 02 June 2025 17:35:28 +0000 (0:00:01.011) 0:00:14.774 ***********
2025-06-02 17:41:54.069061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:41:54.069080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:41:54.069092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 17:41:54.069110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:41:54.069123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:41:54.069141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:41:54.069160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:41:54.069172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:41:54.069184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:41:54.069231 | orchestrator |
2025-06-02 17:41:54.069376 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-02 17:41:54.069394 | orchestrator | Monday 02 June 2025 17:35:31 +0000 (0:00:02.834) 0:00:17.608 ***********
2025-06-02 17:41:54.069405 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.069416 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.069427 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.069437 | orchestrator |
2025-06-02 17:41:54.069529 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-02 17:41:54.069541 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:01.266) 0:00:18.875 ***********
2025-06-02 17:41:54.069552 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-02 17:41:54.069563 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-02 17:41:54.069573 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-02 17:41:54.069584 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-02 17:41:54.069595 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-02 17:41:54.069605 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-02 17:41:54.069616 | orchestrator |
2025-06-02 17:41:54.069633 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-02 17:41:54.069644 | orchestrator | Monday 02 June 2025 17:35:35 +0000 (0:00:03.042) 0:00:21.917 ***********
2025-06-02 17:41:54.069655 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.069666 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.069677 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.069687 | orchestrator |
2025-06-02 17:41:54.069698 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-02 17:41:54.069709 | orchestrator | Monday 02 June 2025 17:35:39 +0000 (0:00:03.496) 0:00:25.413 ***********
2025-06-02 17:41:54.069719 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:41:54.069730 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:41:54.069741 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:41:54.069751 | orchestrator |
2025-06-02 17:41:54.069762 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-02 17:41:54.069784 | orchestrator | Monday 02 June 2025 17:35:41 +0000 (0:00:02.526) 0:00:27.940 ***********
2025-06-02 17:41:54.069796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:41:54.069819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:41:54.069832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 17:41:54.069843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:41:54.069855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:41:54.069871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:41:54.069890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:41:54.069949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 17:41:54.069962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:41:54.069973 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.069985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6',
'__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 17:41:54.069996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.070008 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.070080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 17:41:54.070139 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.070151 | orchestrator | 2025-06-02 17:41:54.070162 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-02 17:41:54.070173 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.539) 0:00:28.479 *********** 2025-06-02 17:41:54.070185 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}}) 2025-06-02 17:41:54.070229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.070327 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.070339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 17:41:54.070391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 
17:41:54.070405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.070428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6', '__omit_place_holder__e392cfd087b515112f0a0930ba0dd202b0f57ff6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 17:41:54.070446 | 
orchestrator | 2025-06-02 17:41:54.070457 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-02 17:41:54.070508 | orchestrator | Monday 02 June 2025 17:35:46 +0000 (0:00:04.150) 0:00:32.630 *********** 2025-06-02 17:41:54.070533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.070625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.070637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.070648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.070659 | orchestrator | 2025-06-02 17:41:54.070670 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-02 17:41:54.070681 | orchestrator | Monday 02 June 2025 17:35:50 +0000 (0:00:03.897) 0:00:36.528 *********** 2025-06-02 17:41:54.070692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 17:41:54.071495 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 17:41:54.071589 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 17:41:54.071605 | orchestrator | 2025-06-02 17:41:54.071618 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-02 17:41:54.071630 | orchestrator | Monday 02 June 2025 17:35:52 +0000 (0:00:01.950) 0:00:38.479 *********** 2025-06-02 17:41:54.071642 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 17:41:54.071653 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 17:41:54.071664 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 17:41:54.071675 | orchestrator | 2025-06-02 17:41:54.071686 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-02 17:41:54.071697 | orchestrator | Monday 02 June 2025 17:35:56 +0000 (0:00:04.698) 0:00:43.178 *********** 2025-06-02 17:41:54.071709 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.071720 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
17:41:54.071731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.071742 | orchestrator | 2025-06-02 17:41:54.071754 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-02 17:41:54.071765 | orchestrator | Monday 02 June 2025 17:35:58 +0000 (0:00:01.696) 0:00:44.874 *********** 2025-06-02 17:41:54.071776 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 17:41:54.071813 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 17:41:54.071825 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 17:41:54.071836 | orchestrator | 2025-06-02 17:41:54.071848 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-02 17:41:54.071859 | orchestrator | Monday 02 June 2025 17:36:01 +0000 (0:00:02.421) 0:00:47.296 *********** 2025-06-02 17:41:54.071870 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 17:41:54.071881 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 17:41:54.071893 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 17:41:54.071904 | orchestrator | 2025-06-02 17:41:54.071916 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-02 17:41:54.071927 | orchestrator | Monday 02 June 2025 17:36:02 +0000 (0:00:01.895) 0:00:49.192 *********** 2025-06-02 17:41:54.071938 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-02 17:41:54.071950 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy.pem) 2025-06-02 17:41:54.071961 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-02 17:41:54.071972 | orchestrator | 2025-06-02 17:41:54.071983 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-02 17:41:54.072006 | orchestrator | Monday 02 June 2025 17:36:04 +0000 (0:00:01.395) 0:00:50.588 *********** 2025-06-02 17:41:54.072019 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-02 17:41:54.072032 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-02 17:41:54.072045 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-02 17:41:54.072058 | orchestrator | 2025-06-02 17:41:54.072072 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-02 17:41:54.072085 | orchestrator | Monday 02 June 2025 17:36:06 +0000 (0:00:01.973) 0:00:52.562 *********** 2025-06-02 17:41:54.072097 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.072111 | orchestrator | 2025-06-02 17:41:54.072123 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-02 17:41:54.072137 | orchestrator | Monday 02 June 2025 17:36:07 +0000 (0:00:00.785) 0:00:53.348 *********** 2025-06-02 17:41:54.072152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.072216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.072268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.072284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.072299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.072319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.072332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.072346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.072365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.072385 | orchestrator | 2025-06-02 17:41:54.072397 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-02 17:41:54.072409 | orchestrator | Monday 02 June 2025 17:36:10 +0000 (0:00:03.803) 0:00:57.151 *********** 2025-06-02 17:41:54.072422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072458 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.072474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072525 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.072538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072573 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.072584 | orchestrator | 2025-06-02 17:41:54.072595 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-02 17:41:54.072606 | orchestrator | Monday 02 June 2025 17:36:11 +0000 (0:00:00.588) 0:00:57.739 *********** 2025-06-02 17:41:54.072623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072672 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.072684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072719 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.072736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072778 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.072789 | orchestrator | 2025-06-02 17:41:54.072801 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA 
certificates] ******** 2025-06-02 17:41:54.072812 | orchestrator | Monday 02 June 2025 17:36:13 +0000 (0:00:01.645) 0:00:59.384 *********** 2025-06-02 17:41:54.072830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072866 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.072877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.072930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.072948 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.072967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.072984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073021 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.073040 | orchestrator | 2025-06-02 17:41:54.073058 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 17:41:54.073078 | orchestrator | Monday 02 June 2025 17:36:14 +0000 (0:00:00.901) 0:01:00.286 *********** 2025-06-02 17:41:54.073097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073220 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.073298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073351 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.073363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073416 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.073426 | orchestrator | 2025-06-02 17:41:54.073438 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 17:41:54.073449 | orchestrator | Monday 02 June 2025 17:36:14 +0000 (0:00:00.803) 0:01:01.089 *********** 2025-06-02 17:41:54.073461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073504 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.073517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073566 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.073577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073629 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.073649 | orchestrator | 2025-06-02 17:41:54.073670 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-02 17:41:54.073689 | orchestrator | Monday 02 June 2025 17:36:16 +0000 (0:00:01.458) 0:01:02.548 *********** 2025-06-02 17:41:54.073708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073788 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.073807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073882 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.073895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.073907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.073927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.073939 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.073951 | orchestrator | 2025-06-02 17:41:54.073962 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-02 17:41:54.073973 | orchestrator | Monday 02 June 2025 17:36:16 +0000 (0:00:00.618) 0:01:03.166 *********** 2025-06-02 17:41:54.074082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.074104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.074127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.074139 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.074151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.074163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.074184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.074196 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.074213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.074225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.074264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.074278 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.074289 | orchestrator | 2025-06-02 17:41:54.074301 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-02 17:41:54.074319 | orchestrator | Monday 02 June 2025 17:36:17 +0000 (0:00:00.779) 0:01:03.945 *********** 2025-06-02 17:41:54.074331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.074343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.074363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.074375 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.074391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.074404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.074415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.074426 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.074447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:41:54.074467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:41:54.074496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:41:54.074516 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.074535 | orchestrator | 2025-06-02 17:41:54.074549 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-02 17:41:54.074560 | orchestrator | Monday 02 June 2025 17:36:19 +0000 (0:00:01.757) 0:01:05.703 *********** 2025-06-02 17:41:54.074572 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 17:41:54.074583 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 17:41:54.074594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 17:41:54.074605 | orchestrator | 2025-06-02 17:41:54.074616 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-02 17:41:54.074629 | orchestrator | Monday 02 June 2025 17:36:22 +0000 (0:00:02.845) 0:01:08.548 *********** 2025-06-02 17:41:54.074648 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 17:41:54.074668 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 17:41:54.074701 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 17:41:54.074720 | orchestrator | 2025-06-02 17:41:54.074732 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-02 17:41:54.074744 | orchestrator | Monday 02 June 2025 17:36:23 +0000 (0:00:01.537) 0:01:10.086 *********** 2025-06-02 17:41:54.074755 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:41:54.074767 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:41:54.074778 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:41:54.074789 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.074800 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:41:54.074811 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:41:54.074822 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.074833 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:41:54.074843 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.074855 | orchestrator | 2025-06-02 17:41:54.074865 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-02 17:41:54.074876 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:01.235) 0:01:11.321 *********** 2025-06-02 17:41:54.074896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.074917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.074929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 17:41:54.074940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.074957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.074969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:41:54.074980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.075005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2025-06-02 17:41:54.075017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:41:54.075029 | orchestrator | 2025-06-02 17:41:54.075040 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-02 17:41:54.075051 | orchestrator | Monday 02 June 2025 17:36:27 +0000 (0:00:02.880) 0:01:14.202 *********** 2025-06-02 17:41:54.075062 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.075074 | orchestrator | 2025-06-02 17:41:54.075085 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-02 17:41:54.075097 | orchestrator | Monday 02 June 2025 17:36:28 +0000 (0:00:00.786) 0:01:14.988 *********** 2025-06-02 17:41:54.075110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 17:41:54.075127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:41:54.075140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 17:41:54.075200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 17:41:54.075212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 
'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:41:54.075282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:41:54.075298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 
'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075363 | orchestrator | 2025-06-02 17:41:54.075374 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-02 17:41:54.075386 | orchestrator | Monday 02 June 2025 17:36:32 +0000 (0:00:03.572) 0:01:18.560 *********** 2025-06-02 17:41:54.075398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 17:41:54.075436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 17:41:54.075450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:41:54.075476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:41:54.075489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 17:41:54.075514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:41:54.075574 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.075585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075617 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.075632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.075654 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.075675 | orchestrator | 2025-06-02 17:41:54.075696 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-02 17:41:54.075716 | orchestrator | Monday 02 June 2025 17:36:33 +0000 (0:00:00.701) 0:01:19.261 *********** 2025-06-02 17:41:54.075737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:41:54.075758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:41:54.075779 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.075792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:41:54.075803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:41:54.075815 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.075833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:41:54.075853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:41:54.075893 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.075907 | orchestrator | 2025-06-02 17:41:54.075918 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-02 17:41:54.075929 | orchestrator | Monday 02 June 2025 17:36:34 +0000 (0:00:01.075) 0:01:20.337 *********** 2025-06-02 17:41:54.075939 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.075951 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.075962 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.075973 | orchestrator | 2025-06-02 17:41:54.075983 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-02 17:41:54.075994 | orchestrator | Monday 02 June 2025 17:36:35 +0000 (0:00:01.329) 0:01:21.666 *********** 2025-06-02 17:41:54.076005 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.076016 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.076027 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.076038 | orchestrator | 2025-06-02 17:41:54.076049 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-02 17:41:54.076060 | orchestrator | Monday 02 June 2025 17:36:37 +0000 (0:00:02.318) 0:01:23.985 *********** 2025-06-02 17:41:54.076071 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.076081 | orchestrator | 2025-06-02 17:41:54.076092 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-02 17:41:54.076103 | orchestrator | Monday 
02 June 2025 17:36:38 +0000 (0:00:00.845) 0:01:24.831 *********** 2025-06-02 17:41:54.076124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.076138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.076191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.076233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076293 | orchestrator | 2025-06-02 17:41:54.076305 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-02 17:41:54.076316 | orchestrator | Monday 02 June 2025 17:36:43 +0000 (0:00:04.864) 0:01:29.695 *********** 2025-06-02 17:41:54.076351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.076364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076395 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.076407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.076419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.076456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076468 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.076485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.076509 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.076520 | orchestrator | 2025-06-02 17:41:54.076532 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-02 17:41:54.076543 | orchestrator | Monday 02 June 2025 17:36:44 +0000 (0:00:00.821) 0:01:30.517 *********** 2025-06-02 17:41:54.076555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:41:54.076573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:41:54.076585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:41:54.076597 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.076609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:41:54.076621 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.076633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:41:54.076644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:41:54.076656 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.076667 | orchestrator | 2025-06-02 17:41:54.076679 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-02 17:41:54.076715 | orchestrator | Monday 02 June 2025 17:36:45 +0000 (0:00:00.862) 0:01:31.379 *********** 2025-06-02 17:41:54.076727 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.076744 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.076755 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.076767 | orchestrator | 2025-06-02 17:41:54.076778 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-02 17:41:54.076789 | orchestrator | Monday 02 June 2025 17:36:47 +0000 (0:00:02.219) 0:01:33.599 *********** 2025-06-02 17:41:54.076800 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.076811 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.076823 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.076834 | orchestrator | 2025-06-02 17:41:54.076845 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-02 17:41:54.076857 | orchestrator | Monday 02 June 2025 17:36:50 +0000 (0:00:02.866) 0:01:36.466 *********** 2025-06-02 17:41:54.076868 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.076879 
| orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.076890 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.076900 | orchestrator | 2025-06-02 17:41:54.076911 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-02 17:41:54.076922 | orchestrator | Monday 02 June 2025 17:36:50 +0000 (0:00:00.311) 0:01:36.778 *********** 2025-06-02 17:41:54.076934 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.076945 | orchestrator | 2025-06-02 17:41:54.076956 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-02 17:41:54.076968 | orchestrator | Monday 02 June 2025 17:36:51 +0000 (0:00:00.673) 0:01:37.451 *********** 2025-06-02 17:41:54.076989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 17:41:54.077009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 17:41:54.077021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 17:41:54.077033 | orchestrator | 2025-06-02 17:41:54.077044 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-02 17:41:54.077055 | orchestrator | Monday 02 June 2025 17:36:54 +0000 (0:00:02.916) 0:01:40.367 *********** 2025-06-02 17:41:54.077072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 17:41:54.077084 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.077096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 17:41:54.077107 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.077132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 17:41:54.077144 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.077155 | orchestrator | 2025-06-02 17:41:54.077166 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-02 17:41:54.077177 | orchestrator | Monday 02 June 2025 17:36:55 +0000 (0:00:01.745) 0:01:42.112 *********** 2025-06-02 17:41:54.077188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:41:54.077202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:41:54.077215 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.077227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:41:54.077318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:41:54.077350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:41:54.077369 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.077381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:41:54.077392 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.077412 | orchestrator | 2025-06-02 17:41:54.077423 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-02 17:41:54.077434 | orchestrator | 
Monday 02 June 2025 17:36:57 +0000 (0:00:01.871) 0:01:43.984 *********** 2025-06-02 17:41:54.077445 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.077455 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.077466 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.077477 | orchestrator | 2025-06-02 17:41:54.077487 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-02 17:41:54.077497 | orchestrator | Monday 02 June 2025 17:36:58 +0000 (0:00:00.885) 0:01:44.870 *********** 2025-06-02 17:41:54.077507 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.077516 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.077526 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.077536 | orchestrator | 2025-06-02 17:41:54.077545 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-02 17:41:54.077563 | orchestrator | Monday 02 June 2025 17:36:59 +0000 (0:00:01.275) 0:01:46.145 *********** 2025-06-02 17:41:54.077573 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.077583 | orchestrator | 2025-06-02 17:41:54.077592 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-02 17:41:54.077602 | orchestrator | Monday 02 June 2025 17:37:00 +0000 (0:00:00.802) 0:01:46.947 *********** 2025-06-02 17:41:54.077612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.077624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077651 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.077684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2025-06-02 17:41:54.077719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.077740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077776 | orchestrator | 2025-06-02 17:41:54.077786 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-02 17:41:54.077796 | orchestrator | Monday 02 June 2025 17:37:04 +0000 (0:00:03.933) 0:01:50.881 *********** 2025-06-02 17:41:54.077806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.077828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.077865 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.077876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.077886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.079424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.079506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.079518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.079542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.079554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.079564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.079587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.079597 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.079613 | orchestrator | 2025-06-02 17:41:54.079629 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-02 17:41:54.079647 | orchestrator | Monday 02 June 2025 17:37:05 +0000 (0:00:01.232) 0:01:52.114 *********** 2025-06-02 17:41:54.079664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:41:54.079681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:41:54.079692 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.079702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:41:54.079719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:41:54.079736 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.079760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:41:54.079776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:41:54.079787 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.079797 | orchestrator | 2025-06-02 17:41:54.079807 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-02 17:41:54.079816 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:00.924) 0:01:53.038 *********** 2025-06-02 17:41:54.079826 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.079835 | orchestrator | changed: [testbed-node-1] 2025-06-02 
17:41:54.079845 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.079854 | orchestrator | 2025-06-02 17:41:54.079864 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-02 17:41:54.079873 | orchestrator | Monday 02 June 2025 17:37:08 +0000 (0:00:01.262) 0:01:54.301 *********** 2025-06-02 17:41:54.079883 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.079892 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.079902 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.079911 | orchestrator | 2025-06-02 17:41:54.079921 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-02 17:41:54.079930 | orchestrator | Monday 02 June 2025 17:37:10 +0000 (0:00:02.517) 0:01:56.818 *********** 2025-06-02 17:41:54.079940 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.079949 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.079959 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.079974 | orchestrator | 2025-06-02 17:41:54.079984 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-02 17:41:54.079994 | orchestrator | Monday 02 June 2025 17:37:11 +0000 (0:00:00.829) 0:01:57.648 *********** 2025-06-02 17:41:54.080003 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.080013 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.080022 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.080032 | orchestrator | 2025-06-02 17:41:54.080041 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-02 17:41:54.080051 | orchestrator | Monday 02 June 2025 17:37:11 +0000 (0:00:00.398) 0:01:58.046 *********** 2025-06-02 17:41:54.080060 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.080070 | orchestrator | 
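The `haproxy` sub-dicts that the tasks above loop over (e.g. `cinder_api` with `enabled`, `external`, `port`, `listen_port`, `tls_backend`) drive which HAProxy frontends kolla-ansible renders: internal entries bind the internal VIP, `external: True` entries bind the external VIP. The following is a hypothetical Python sketch of that selection logic — it is not kolla-ansible's actual template, and the two VIP addresses are assumptions; the dict values are copied from the cinder entries logged above.

```python
# Hypothetical sketch (NOT kolla-ansible's real template): render one
# "frontend" stanza per enabled haproxy entry, picking the bind address
# from the 'external' flag, mirroring the service dicts in this log.
def render_frontends(haproxy_conf, internal_vip, external_vip):
    lines = []
    for name, svc in haproxy_conf.items():
        # kolla service dicts use 'yes'/'no' strings here, per the log output
        if svc.get("enabled") not in ("yes", True):
            continue
        vip = external_vip if svc.get("external") else internal_vip
        lines.append(f"frontend {name}\n    bind {vip}:{svc['listen_port']}")
    return lines

# Entry values taken verbatim from the cinder_api item above;
# both VIP addresses below are made-up placeholders.
conf = {
    "cinder_api": {"enabled": "yes", "mode": "http", "external": False,
                   "port": "8776", "listen_port": "8776", "tls_backend": "no"},
    "cinder_api_external": {"enabled": "yes", "mode": "http", "external": True,
                            "external_fqdn": "api.testbed.osism.xyz",
                            "port": "8776", "listen_port": "8776",
                            "tls_backend": "no"},
}
for stanza in render_frontends(conf, "192.168.16.9", "203.0.113.10"):
    print(stanza)
```

The `custom_member_list` entries seen in the ceph-rgw items work the same way on the backend side: each string is emitted as a literal `server` line inside the rendered stanza.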
2025-06-02 17:41:54.080079 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-02 17:41:54.080089 | orchestrator | Monday 02 June 2025 17:37:12 +0000 (0:00:00.839) 0:01:58.886 *********** 2025-06-02 17:41:54.080104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:41:54.080115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:41:54.080126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 
17:41:54.080218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:41:54.080262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080306 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:41:54.080358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:41:54.080369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080433 | orchestrator | 2025-06-02 17:41:54.080443 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-02 17:41:54.080453 | orchestrator | Monday 02 June 2025 17:37:17 +0000 (0:00:04.719) 0:02:03.605 *********** 2025-06-02 17:41:54.080468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:41:54.080488 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:41:54.080498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080523 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080560 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.080577 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:41:54.080593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:41:54.080609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080708 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.080724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:41:54.080739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:41:54.080787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.080897 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.080913 | orchestrator | 2025-06-02 17:41:54.080930 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-02 17:41:54.080947 | orchestrator | Monday 
02 June 2025 17:37:18 +0000 (0:00:00.905) 0:02:04.510 ***********
2025-06-02 17:41:54.080964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 17:41:54.080981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 17:41:54.080999 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.081015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 17:41:54.081032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 17:41:54.081049 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.081066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-02 17:41:54.081082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-02 17:41:54.081099 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.081115 | orchestrator |
2025-06-02 17:41:54.081138 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-06-02 17:41:54.081155 | orchestrator | Monday 02 June 2025 17:37:19 +0000 (0:00:01.104) 0:02:05.615 ***********
2025-06-02 17:41:54.081171 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.081187 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.081203 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.081218 | orchestrator |
2025-06-02 17:41:54.081234 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-06-02 17:41:54.081329 | orchestrator | Monday 02 June 2025 17:37:21 +0000 (0:00:01.818) 0:02:07.434 ***********
2025-06-02 17:41:54.081361 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.081377 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.081394 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.081410 | orchestrator |
2025-06-02 17:41:54.081425 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-06-02 17:41:54.081441 | orchestrator | Monday 02 June 2025 17:37:23 +0000 (0:00:01.956) 0:02:09.390 ***********
2025-06-02 17:41:54.081458 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.081473 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.081489 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.081505 | orchestrator |
2025-06-02 17:41:54.081521 | orchestrator | TASK [include_role : glance] ***************************************************
2025-06-02 17:41:54.081536 | orchestrator | Monday 02 June 2025 17:37:23 +0000 (0:00:00.311) 0:02:09.701 ***********
2025-06-02 17:41:54.081552 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.081568 | orchestrator |
2025-06-02 17:41:54.081583 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-06-02 17:41:54.081598 | orchestrator | Monday 02 June 2025 17:37:24 +0000 (0:00:00.845) 0:02:10.547 ***********
2025-06-02 17:41:54.081632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value':
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:41:54.081658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.081692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:41:54.081714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.081746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:41:54.081761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.081783 | orchestrator | 2025-06-02 17:41:54.081796 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-02 17:41:54.081809 | orchestrator | Monday 02 June 2025 17:37:28 +0000 (0:00:04.329) 0:02:14.877 *********** 2025-06-02 17:41:54.081835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:41:54.081850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.081864 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.081927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:41:54.081954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.081969 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.081989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:41:54.082071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.082114 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.082129 | orchestrator | 2025-06-02 17:41:54.082143 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-02 17:41:54.082156 | orchestrator | Monday 02 June 2025 17:37:31 +0000 (0:00:02.973) 0:02:17.850 *********** 2025-06-02 17:41:54.082170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 17:41:54.082194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 17:41:54.082209 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.082223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 17:41:54.082232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 17:41:54.082266 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.082279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 17:41:54.082295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 17:41:54.082305 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.082319 | orchestrator | 2025-06-02 17:41:54.082331 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-02 17:41:54.082344 | orchestrator | Monday 02 June 2025 17:37:34 +0000 (0:00:03.186) 0:02:21.037 *********** 2025-06-02 17:41:54.082357 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.082369 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.082382 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.082396 | orchestrator | 2025-06-02 17:41:54.082408 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-02 17:41:54.082422 | orchestrator | Monday 02 June 2025 17:37:36 +0000 (0:00:01.599) 0:02:22.636 *********** 2025-06-02 17:41:54.082435 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.082448 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.082461 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.082474 | orchestrator | 2025-06-02 17:41:54.082487 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-02 17:41:54.082503 | orchestrator | Monday 02 June 2025 17:37:38 +0000 (0:00:02.049) 0:02:24.685 *********** 2025-06-02 17:41:54.082511 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.082519 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.082526 | orchestrator 
| skipping: [testbed-node-2] 2025-06-02 17:41:54.082534 | orchestrator | 2025-06-02 17:41:54.082542 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-02 17:41:54.082549 | orchestrator | Monday 02 June 2025 17:37:38 +0000 (0:00:00.351) 0:02:25.037 *********** 2025-06-02 17:41:54.082557 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.082565 | orchestrator | 2025-06-02 17:41:54.082573 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-02 17:41:54.082580 | orchestrator | Monday 02 June 2025 17:37:39 +0000 (0:00:00.843) 0:02:25.880 *********** 2025-06-02 17:41:54.082589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:41:54.082604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:41:54.082613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:41:54.082621 | orchestrator | 2025-06-02 17:41:54.082629 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-02 17:41:54.082636 | orchestrator | Monday 02 June 2025 17:37:44 +0000 (0:00:04.709) 0:02:30.590 *********** 2025-06-02 17:41:54.082661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:41:54.082670 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.082683 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:41:54.082691 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.082699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:41:54.082708 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.082715 | orchestrator | 2025-06-02 17:41:54.082723 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-02 17:41:54.082731 | orchestrator | Monday 02 June 2025 17:37:44 +0000 (0:00:00.437) 0:02:31.027 *********** 2025-06-02 17:41:54.082739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 17:41:54.082751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 17:41:54.082760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 17:41:54.082768 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.082776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 17:41:54.082784 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.082792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 17:41:54.082800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 17:41:54.082807 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.082815 | orchestrator | 2025-06-02 17:41:54.082823 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-02 17:41:54.082831 | orchestrator | Monday 02 June 2025 17:37:45 +0000 (0:00:00.837) 0:02:31.865 *********** 2025-06-02 17:41:54.082839 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.082846 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.082854 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.082862 
| orchestrator |
2025-06-02 17:41:54.082870 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-06-02 17:41:54.082877 | orchestrator | Monday 02 June 2025 17:37:47 +0000 (0:00:01.687) 0:02:33.552 ***********
2025-06-02 17:41:54.082891 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.082899 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.082907 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.082914 | orchestrator |
2025-06-02 17:41:54.082928 | orchestrator | TASK [include_role : heat] *****************************************************
2025-06-02 17:41:54.082936 | orchestrator | Monday 02 June 2025 17:37:49 +0000 (0:00:02.344) 0:02:35.897 ***********
2025-06-02 17:41:54.082944 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.082952 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.082959 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.082967 | orchestrator |
2025-06-02 17:41:54.082975 | orchestrator | TASK [include_role : horizon] **************************************************
2025-06-02 17:41:54.082983 | orchestrator | Monday 02 June 2025 17:37:49 +0000 (0:00:00.306) 0:02:36.203 ***********
2025-06-02 17:41:54.082990 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.082998 | orchestrator |
2025-06-02 17:41:54.083006 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-06-02 17:41:54.083014 | orchestrator | Monday 02 June 2025 17:37:50 +0000 (0:00:00.898) 0:02:37.102 ***********
2025-06-02 17:41:54.083044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:41:54.083062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:41:54.083082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:41:54.083091 | orchestrator |
2025-06-02 17:41:54.083099 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-06-02 17:41:54.083107 | orchestrator | Monday 02 June 2025 17:37:56 +0000 (0:00:05.502) 0:02:42.604 ***********
2025-06-02 17:41:54.083122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:41:54.083155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:41:54.083170 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.083177 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.083192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:41:54.083201 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.083209 | orchestrator |
2025-06-02 17:41:54.083217 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-06-02 17:41:54.083225 | orchestrator | Monday 02 June 2025 17:37:58 +0000 (0:00:01.791) 0:02:44.397 ***********
2025-06-02 17:41:54.083234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:41:54.083265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:41:54.083280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:41:54.083290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:41:54.083304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 17:41:54.083312 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.083320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:41:54.083328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:41:54.083341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:41:54.083350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:41:54.083358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 17:41:54.083366 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.083374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:41:54.083382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:41:54.083390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:41:54.083398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:41:54.083405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 17:41:54.083413 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.083421 | orchestrator |
2025-06-02 17:41:54.083429 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-02 17:41:54.083446 | orchestrator | Monday 02 June 2025 17:38:00 +0000 (0:00:01.986) 0:02:46.384 ***********
2025-06-02 17:41:54.083454 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.083462 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.083470 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.083477 | orchestrator |
2025-06-02 17:41:54.083485 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-02 17:41:54.083493 | orchestrator | Monday 02 June 2025 17:38:02 +0000 (0:00:02.094) 0:02:48.479 ***********
2025-06-02 17:41:54.083501 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.083508 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.083516 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.083524 | orchestrator |
2025-06-02 17:41:54.083532 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-02 17:41:54.083540 | orchestrator | Monday 02 June 2025 17:38:04 +0000 (0:00:02.737) 0:02:51.216 ***********
2025-06-02 17:41:54.083547 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.083555 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.083563 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.083571 | orchestrator |
2025-06-02 17:41:54.083578 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-02 17:41:54.083586 | orchestrator | Monday 02 June 2025 17:38:05 +0000 (0:00:00.356) 0:02:51.573 ***********
2025-06-02 17:41:54.083594 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.083602 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.083610 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.083618 | orchestrator |
2025-06-02 17:41:54.083625 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-02 17:41:54.083633 | orchestrator | Monday 02 June 2025 17:38:05 +0000 (0:00:00.329) 0:02:51.902 ***********
2025-06-02 17:41:54.083641 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.083649 | orchestrator |
2025-06-02 17:41:54.083656 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-02 17:41:54.083664 | orchestrator | Monday 02 June 2025 17:38:06 +0000 (0:00:01.182) 0:02:53.085 ***********
2025-06-02 17:41:54.083678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:41:54.083688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:41:54.083698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:41:54.083716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:41:54.083726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:41:54.083739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:41:54.083748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:41:54.083756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:41:54.083777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:41:54.083785 | orchestrator |
2025-06-02 17:41:54.083794 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-06-02 17:41:54.083801 | orchestrator | Monday 02 June 2025 17:38:12 +0000 (0:00:06.047) 0:02:59.132 ***********
2025-06-02 17:41:54.083810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000',
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:41:54.083823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:41:54.083832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:41:54.083840 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.083848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:41:54.083865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:41:54.083874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:41:54.083882 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.083895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:41:54.083905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:41:54.083913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:41:54.083925 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.083933 | orchestrator |
2025-06-02 17:41:54.083941 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-06-02 17:41:54.083949 | orchestrator | Monday 02 June 2025 17:38:13 +0000 (0:00:00.588) 0:02:59.721 ***********
2025-06-02 17:41:54.083958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 17:41:54.083967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 17:41:54.083975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 17:41:54.083988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 17:41:54.083997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 17:41:54.084006 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.084014 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.084022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 17:41:54.084030 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.084038 | orchestrator |
2025-06-02 17:41:54.084046 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-06-02 17:41:54.084054 | orchestrator | Monday 02 June 2025 17:38:14 +0000 (0:00:00.875) 0:03:00.596 ***********
2025-06-02 17:41:54.084061 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.084069 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.084077 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.084085 | orchestrator |
2025-06-02 17:41:54.084092 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-06-02 17:41:54.084100 | orchestrator | Monday 02 June 2025 17:38:15 +0000 (0:00:01.286) 0:03:01.883 ***********
2025-06-02 17:41:54.084108 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.084115 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.084123 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.084131 | orchestrator |
2025-06-02 17:41:54.084139 |
orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-02 17:41:54.084151 | orchestrator | Monday 02 June 2025 17:38:17 +0000 (0:00:02.021) 0:03:03.904 *********** 2025-06-02 17:41:54.084160 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.084167 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.084181 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.084189 | orchestrator | 2025-06-02 17:41:54.084197 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-02 17:41:54.084205 | orchestrator | Monday 02 June 2025 17:38:17 +0000 (0:00:00.306) 0:03:04.211 *********** 2025-06-02 17:41:54.084213 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.084220 | orchestrator | 2025-06-02 17:41:54.084228 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-02 17:41:54.084289 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:01.756) 0:03:05.967 *********** 2025-06-02 17:41:54.084301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:41:54.084310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:41:54.084333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:41:54.084358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084365 | orchestrator | 2025-06-02 17:41:54.084372 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-02 17:41:54.084379 | orchestrator | Monday 02 June 2025 17:38:23 +0000 (0:00:03.376) 0:03:09.344 *********** 2025-06-02 17:41:54.084390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:41:54.084398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084405 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.084416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:41:54.084428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084435 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.084442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:41:54.084452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084459 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.084466 | orchestrator | 2025-06-02 17:41:54.084472 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-02 17:41:54.084479 | orchestrator | Monday 02 June 2025 17:38:23 +0000 (0:00:00.672) 0:03:10.017 *********** 2025-06-02 17:41:54.084487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 17:41:54.084493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 17:41:54.084500 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.084511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 17:41:54.084518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 17:41:54.084525 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.084532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 17:41:54.084538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 
17:41:54.084549 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.084556 | orchestrator | 2025-06-02 17:41:54.084563 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-02 17:41:54.084569 | orchestrator | Monday 02 June 2025 17:38:25 +0000 (0:00:01.387) 0:03:11.404 *********** 2025-06-02 17:41:54.084576 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.084582 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.084589 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.084596 | orchestrator | 2025-06-02 17:41:54.084604 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-02 17:41:54.084615 | orchestrator | Monday 02 June 2025 17:38:26 +0000 (0:00:01.328) 0:03:12.732 *********** 2025-06-02 17:41:54.084626 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.084637 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.084648 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.084659 | orchestrator | 2025-06-02 17:41:54.084669 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-02 17:41:54.084677 | orchestrator | Monday 02 June 2025 17:38:28 +0000 (0:00:02.079) 0:03:14.812 *********** 2025-06-02 17:41:54.084683 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.084690 | orchestrator | 2025-06-02 17:41:54.084697 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-02 17:41:54.084703 | orchestrator | Monday 02 June 2025 17:38:29 +0000 (0:00:01.041) 0:03:15.853 *********** 2025-06-02 17:41:54.084710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 17:41:54.084718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084742 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 17:41:54.084762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 17:41:54.084776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084837 | orchestrator | 2025-06-02 17:41:54.084844 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-02 17:41:54.084851 | orchestrator | Monday 02 June 2025 17:38:33 +0000 (0:00:03.662) 0:03:19.516 *********** 2025-06-02 17:41:54.084858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 17:41:54.084873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.084894 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.085277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 17:41:54.085301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.085309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.085332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.085340 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.085347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 17:41:54.085360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.085368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.085375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.085381 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.085388 | orchestrator | 2025-06-02 17:41:54.085395 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-02 17:41:54.085402 | orchestrator | Monday 02 June 
2025 17:38:34 +0000 (0:00:00.724) 0:03:20.240 *********** 2025-06-02 17:41:54.085414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 17:41:54.085421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 17:41:54.085428 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.085435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 17:41:54.085442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 17:41:54.085449 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.085463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 17:41:54.085470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 17:41:54.085476 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.085483 | orchestrator | 2025-06-02 17:41:54.085490 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-02 17:41:54.085497 | orchestrator | Monday 02 June 2025 17:38:34 +0000 (0:00:00.872) 0:03:21.112 *********** 2025-06-02 17:41:54.085504 | orchestrator | 
changed: [testbed-node-0] 2025-06-02 17:41:54.085510 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.085517 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.085524 | orchestrator | 2025-06-02 17:41:54.085531 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-02 17:41:54.085537 | orchestrator | Monday 02 June 2025 17:38:36 +0000 (0:00:01.661) 0:03:22.774 *********** 2025-06-02 17:41:54.085544 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.085550 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.085557 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.085564 | orchestrator | 2025-06-02 17:41:54.085570 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-02 17:41:54.085577 | orchestrator | Monday 02 June 2025 17:38:38 +0000 (0:00:02.129) 0:03:24.904 *********** 2025-06-02 17:41:54.085584 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.085590 | orchestrator | 2025-06-02 17:41:54.085597 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-02 17:41:54.085603 | orchestrator | Monday 02 June 2025 17:38:39 +0000 (0:00:01.136) 0:03:26.040 *********** 2025-06-02 17:41:54.085610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 17:41:54.085617 | orchestrator | 2025-06-02 17:41:54.085624 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-02 17:41:54.085631 | orchestrator | Monday 02 June 2025 17:38:42 +0000 (0:00:02.943) 0:03:28.984 *********** 2025-06-02 17:41:54.085644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:41:54.085657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:41:54.085664 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.085679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:41:54.085687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:41:54.085698 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.085708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:41:54.085716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:41:54.085723 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.085730 | orchestrator | 2025-06-02 17:41:54.085737 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-02 17:41:54.085744 | orchestrator | Monday 02 June 2025 17:38:45 +0000 (0:00:03.225) 0:03:32.209 *********** 2025-06-02 17:41:54.085756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:41:54.085768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:41:54.085775 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.085786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:41:54.085798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:41:54.085809 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.085817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:41:54.085828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:41:54.085835 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.085842 | orchestrator | 2025-06-02 17:41:54.085849 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-02 17:41:54.085855 | orchestrator | Monday 02 June 2025 17:38:48 +0000 (0:00:02.662) 0:03:34.872 *********** 
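The `haproxy-config` loop items above each carry an `haproxy` sub-dict per frontend, and the role skips hosts/items where nothing needs to be written, which is why every firewall task reports `skipping`. The filtering can be modeled as below. The dict literal mirrors the `mariadb` / `mariadb_external_lb` entries printed in the log (trimmed to the relevant keys); the `enabled_frontends` helper is a hypothetical illustration, not kolla-ansible's actual code. Note that the log shows `enabled` as both booleans (`True`/`False`, mariadb) and strings (`'yes'`, manila), so the sketch accepts both forms.

```python
# Frontend entries trimmed from the 'mariadb' loop item in the log above.
haproxy_entries = {
    "mariadb": {
        "enabled": True,            # internal VIP frontend is active
        "mode": "tcp",
        "port": "3306",
        "listen_port": "3306",
    },
    "mariadb_external_lb": {
        "enabled": False,           # external LB frontend disabled in this testbed
        "mode": "tcp",
        "port": "3306",
        "listen_port": "3306",
    },
}

def enabled_frontends(entries):
    """Return names of frontends whose 'enabled' flag is truthy.

    Hypothetical helper: kolla-ansible mixes bools and the string 'yes'
    in these dicts (both appear in the log), so accept both spellings."""
    truthy = {True, "yes", "true"}
    return [name for name, cfg in entries.items() if cfg.get("enabled") in truthy]

print(enabled_frontends(haproxy_entries))  # only the internal frontend survives
```

Applied to the manila items earlier in the log (where both `manila_api` and `manila_api_external` carry `'enabled': 'yes'`), the same filter would keep both frontends.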
2025-06-02 17:41:54.085864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 17:41:54.085876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 17:41:54.085888 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.085896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 17:41:54.085905 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 17:41:54.085913 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.085921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 17:41:54.085932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-02 17:41:54.085941 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.085948 
| orchestrator | 2025-06-02 17:41:54.085956 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-02 17:41:54.085964 | orchestrator | Monday 02 June 2025 17:38:51 +0000 (0:00:02.931) 0:03:37.803 *********** 2025-06-02 17:41:54.085972 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.085980 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.085987 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.085995 | orchestrator | 2025-06-02 17:41:54.086003 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-02 17:41:54.086011 | orchestrator | Monday 02 June 2025 17:38:53 +0000 (0:00:01.976) 0:03:39.780 *********** 2025-06-02 17:41:54.086054 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.086065 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.086073 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.086081 | orchestrator | 2025-06-02 17:41:54.086089 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-02 17:41:54.086097 | orchestrator | Monday 02 June 2025 17:38:55 +0000 (0:00:01.492) 0:03:41.272 *********** 2025-06-02 17:41:54.086105 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.086117 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.086125 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.086132 | orchestrator | 2025-06-02 17:41:54.086141 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-02 17:41:54.086148 | orchestrator | Monday 02 June 2025 17:38:55 +0000 (0:00:00.353) 0:03:41.626 *********** 2025-06-02 17:41:54.086156 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.086164 | orchestrator | 2025-06-02 17:41:54.086172 | orchestrator | TASK [haproxy-config : Copying over 
memcached haproxy config] ****************** 2025-06-02 17:41:54.086180 | orchestrator | Monday 02 June 2025 17:38:56 +0000 (0:00:01.177) 0:03:42.803 *********** 2025-06-02 17:41:54.086195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 17:41:54.086204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 17:41:54.086212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 
'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 17:41:54.086219 | orchestrator | 2025-06-02 17:41:54.086225 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-02 17:41:54.086232 | orchestrator | Monday 02 June 2025 17:38:58 +0000 (0:00:01.713) 0:03:44.516 *********** 2025-06-02 17:41:54.086303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 17:41:54.086325 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.086333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 17:41:54.086340 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.086353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 17:41:54.086360 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.086366 | orchestrator | 2025-06-02 17:41:54.086373 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-02 17:41:54.086380 | orchestrator | Monday 02 June 2025 17:38:58 +0000 (0:00:00.392) 0:03:44.908 *********** 2025-06-02 17:41:54.086387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 17:41:54.086394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 17:41:54.086400 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.086407 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.086413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-02 17:41:54.086419 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.086425 | orchestrator | 2025-06-02 17:41:54.086431 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-02 17:41:54.086437 | orchestrator | Monday 02 June 2025 17:38:59 +0000 (0:00:00.606) 0:03:45.515 *********** 2025-06-02 17:41:54.086444 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.086450 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.086456 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.086462 | orchestrator | 2025-06-02 17:41:54.086468 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-02 17:41:54.086474 | orchestrator | Monday 02 June 2025 17:39:00 +0000 (0:00:00.780) 0:03:46.295 *********** 2025-06-02 17:41:54.086480 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.086486 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.086497 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.086503 | orchestrator | 
2025-06-02 17:41:54.086509 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-02 17:41:54.086515 | orchestrator | Monday 02 June 2025 17:39:01 +0000 (0:00:01.325) 0:03:47.621 *********** 2025-06-02 17:41:54.086521 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.086527 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.086533 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.086539 | orchestrator | 2025-06-02 17:41:54.086552 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-02 17:41:54.086558 | orchestrator | Monday 02 June 2025 17:39:01 +0000 (0:00:00.326) 0:03:47.947 *********** 2025-06-02 17:41:54.086564 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.086571 | orchestrator | 2025-06-02 17:41:54.086577 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-02 17:41:54.086583 | orchestrator | Monday 02 June 2025 17:39:03 +0000 (0:00:01.429) 0:03:49.377 *********** 2025-06-02 17:41:54.086589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:41:54.086601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:41:54.086637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-06-02 17:41:54.086644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:41:54.086670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086676 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.086715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:41:54.086743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-06-02 17:41:54.086750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.086800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.086827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.086846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.086886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.086897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:41:54.086914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086931 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:41:54.086979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.086986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.086992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.087063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087080 | orchestrator | 2025-06-02 17:41:54.087087 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-02 17:41:54.087093 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:04.370) 0:03:53.747 *********** 2025-06-02 17:41:54.087113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:41:54.087121 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:41:54.087154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.087266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:41:54.087280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087297 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.087303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:41:54.087325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:41:54.087362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:41:54.087382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:41:54.087522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.087529 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:41:54.087629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:41:54.087668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.087675 | orchestrator | 2025-06-02 17:41:54 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED 2025-06-02 17:41:54.087682 | orchestrator | 2025-06-02 17:41:54 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED 2025-06-02 17:41:54.087688 | orchestrator | 2025-06-02 17:41:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:41:54.087694 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.087701 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.087707 | orchestrator | 2025-06-02 17:41:54.087713 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-02 17:41:54.087719 | orchestrator | Monday 02 June 2025 17:39:09 +0000 (0:00:01.714) 0:03:55.462 *********** 2025-06-02 17:41:54.087726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:41:54.087732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:41:54.087739 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 17:41:54.087745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:41:54.087751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:41:54.087757 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.087763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:41:54.087769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:41:54.087799 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.087805 | orchestrator | 2025-06-02 17:41:54.087815 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-02 17:41:54.087821 | orchestrator | Monday 02 June 2025 17:39:11 +0000 (0:00:02.343) 0:03:57.805 *********** 2025-06-02 17:41:54.087827 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.087834 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.087840 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.087846 | orchestrator | 2025-06-02 17:41:54.087852 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-02 17:41:54.087858 | orchestrator | Monday 02 June 2025 17:39:12 +0000 (0:00:01.326) 0:03:59.132 *********** 2025-06-02 17:41:54.087864 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.087870 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 17:41:54.087877 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.087883 | orchestrator | 2025-06-02 17:41:54.087889 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-02 17:41:54.087895 | orchestrator | Monday 02 June 2025 17:39:15 +0000 (0:00:02.251) 0:04:01.384 *********** 2025-06-02 17:41:54.087901 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.087907 | orchestrator | 2025-06-02 17:41:54.087913 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-02 17:41:54.087919 | orchestrator | Monday 02 June 2025 17:39:16 +0000 (0:00:01.228) 0:04:02.612 *********** 2025-06-02 17:41:54.087930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.087938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.087944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.087956 | orchestrator | 2025-06-02 17:41:54.087963 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-02 17:41:54.087969 | orchestrator | Monday 02 June 2025 17:39:20 +0000 (0:00:03.677) 0:04:06.290 *********** 2025-06-02 17:41:54.087978 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.087985 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.087995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.088002 | orchestrator | skipping: [testbed-node-1] 
2025-06-02 17:41:54.088008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.088014 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.088021 | orchestrator | 2025-06-02 17:41:54.088027 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-02 17:41:54.088033 | orchestrator | Monday 02 June 2025 17:39:20 +0000 (0:00:00.540) 0:04:06.830 *********** 2025-06-02 17:41:54.088039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088067 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.088073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088086 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.088092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088108 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.088114 | orchestrator | 2025-06-02 17:41:54.088120 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-02 17:41:54.088126 | orchestrator | Monday 02 June 2025 17:39:21 +0000 (0:00:00.798) 0:04:07.628 *********** 2025-06-02 17:41:54.088133 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.088139 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.088145 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.088151 | orchestrator | 2025-06-02 17:41:54.088157 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-02 17:41:54.088163 | orchestrator | Monday 02 June 2025 17:39:23 +0000 (0:00:01.701) 0:04:09.330 *********** 2025-06-02 17:41:54.088169 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.088175 | orchestrator | changed: [testbed-node-1] 2025-06-02 
17:41:54.088182 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.088188 | orchestrator | 2025-06-02 17:41:54.088194 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-02 17:41:54.088200 | orchestrator | Monday 02 June 2025 17:39:25 +0000 (0:00:02.091) 0:04:11.422 *********** 2025-06-02 17:41:54.088206 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.088212 | orchestrator | 2025-06-02 17:41:54.088218 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-02 17:41:54.088224 | orchestrator | Monday 02 June 2025 17:39:26 +0000 (0:00:01.370) 0:04:12.793 *********** 2025-06-02 17:41:54.088248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.088267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.088305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.088438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088462 | orchestrator | 2025-06-02 17:41:54.088469 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-02 17:41:54.088475 | orchestrator | Monday 02 June 2025 17:39:30 +0000 (0:00:04.354) 0:04:17.148 *********** 2025-06-02 17:41:54.088482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.088498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088511 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.088521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.088528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088541 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.088551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.088565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-06-02 17:41:54.088571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:41:54.088578 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.088584 | orchestrator | 2025-06-02 17:41:54.088590 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-02 17:41:54.088596 | orchestrator | Monday 02 June 2025 17:39:31 +0000 (0:00:01.021) 0:04:18.169 *********** 2025-06-02 17:41:54.088606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088633 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.088639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088668 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.088677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088697 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:41:54.088703 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.088709 | orchestrator | 2025-06-02 17:41:54.088715 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-02 17:41:54.088721 | orchestrator | Monday 02 June 2025 17:39:32 +0000 (0:00:00.897) 0:04:19.066 *********** 2025-06-02 17:41:54.088728 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.088734 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.088740 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.088746 | orchestrator | 2025-06-02 17:41:54.088752 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-02 17:41:54.088758 | orchestrator | Monday 02 June 2025 17:39:34 +0000 (0:00:01.667) 0:04:20.733 *********** 2025-06-02 17:41:54.088764 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.088770 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.088776 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.088782 | orchestrator | 2025-06-02 17:41:54.088788 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-02 17:41:54.088795 | orchestrator | Monday 02 June 2025 17:39:36 +0000 (0:00:02.075) 0:04:22.809 *********** 2025-06-02 17:41:54.088801 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.088807 | orchestrator | 2025-06-02 17:41:54.088813 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-02 17:41:54.088819 | orchestrator | Monday 02 June 2025 17:39:38 +0000 (0:00:01.555) 0:04:24.365 *********** 2025-06-02 17:41:54.088825 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-02 17:41:54.088832 | orchestrator | 2025-06-02 17:41:54.088838 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-02 17:41:54.088844 | orchestrator | Monday 02 June 2025 17:39:39 +0000 (0:00:01.079) 0:04:25.445 *********** 2025-06-02 17:41:54.088866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 17:41:54.088879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 17:41:54.088886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 17:41:54.088892 | orchestrator | 2025-06-02 17:41:54.088899 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-02 17:41:54.088905 | orchestrator | Monday 02 June 2025 17:39:43 +0000 (0:00:04.034) 0:04:29.479 *********** 2025-06-02 17:41:54.088914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.088921 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.088927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.088934 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.088940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.088947 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.088953 | orchestrator | 2025-06-02 17:41:54.088959 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-02 17:41:54.088965 | orchestrator | Monday 02 June 2025 17:39:44 +0000 (0:00:01.279) 0:04:30.758 *********** 2025-06-02 17:41:54.088972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 17:41:54.088979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 17:41:54.088989 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.088999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 17:41:54.089005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 17:41:54.089012 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.089018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 17:41:54.089025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 17:41:54.089032 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.089039 | orchestrator | 2025-06-02 17:41:54.089046 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 17:41:54.089054 | orchestrator | Monday 02 June 2025 17:39:46 +0000 (0:00:02.026) 0:04:32.785 *********** 2025-06-02 17:41:54.089061 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.089068 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.089075 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.089082 | orchestrator | 2025-06-02 17:41:54.089089 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 17:41:54.089097 | orchestrator | Monday 02 June 2025 17:39:48 +0000 (0:00:02.387) 0:04:35.173 *********** 2025-06-02 17:41:54.089104 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.089112 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.089119 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.089127 | orchestrator | 2025-06-02 17:41:54.089134 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-02 17:41:54.089141 | orchestrator | Monday 02 June 2025 17:39:52 +0000 (0:00:03.250) 0:04:38.423 *********** 2025-06-02 17:41:54.089148 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-02 17:41:54.089156 | orchestrator | 2025-06-02 17:41:54.089163 | orchestrator | TASK 
[haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-02 17:41:54.089173 | orchestrator | Monday 02 June 2025 17:39:53 +0000 (0:00:00.842) 0:04:39.265 *********** 2025-06-02 17:41:54.089182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.089190 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.089197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.089208 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.089214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.089221 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.089227 | orchestrator | 2025-06-02 17:41:54.089233 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-02 17:41:54.089291 | orchestrator | Monday 02 June 2025 17:39:54 +0000 (0:00:01.290) 0:04:40.556 *********** 2025-06-02 17:41:54.089302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.089308 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.089315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.089321 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.089327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 17:41:54.089334 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.089340 | orchestrator | 2025-06-02 17:41:54.089346 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-02 17:41:54.089352 | orchestrator | Monday 02 June 2025 17:39:55 +0000 (0:00:01.631) 0:04:42.187 *********** 2025-06-02 17:41:54.089358 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.089364 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.089370 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.089377 | orchestrator | 2025-06-02 17:41:54.089383 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 17:41:54.089393 | orchestrator | Monday 02 June 2025 17:39:57 +0000 (0:00:01.198) 0:04:43.386 *********** 2025-06-02 17:41:54.089399 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.089406 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.089412 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.089418 | orchestrator | 2025-06-02 17:41:54.089424 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 17:41:54.089435 | orchestrator | Monday 02 June 2025 17:39:59 +0000 (0:00:02.409) 0:04:45.795 *********** 2025-06-02 17:41:54.089441 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.089448 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.089454 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.089460 | orchestrator | 2025-06-02 17:41:54.089466 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 
2025-06-02 17:41:54.089473 | orchestrator | Monday 02 June 2025 17:40:02 +0000 (0:00:03.183) 0:04:48.978 *********** 2025-06-02 17:41:54.089479 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-02 17:41:54.089485 | orchestrator | 2025-06-02 17:41:54.089491 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-02 17:41:54.089498 | orchestrator | Monday 02 June 2025 17:40:03 +0000 (0:00:01.113) 0:04:50.092 *********** 2025-06-02 17:41:54.089504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 17:41:54.089510 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.089517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 17:41:54.089523 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.089533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:41:54.089540 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.089546 | orchestrator |
2025-06-02 17:41:54.089552 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-06-02 17:41:54.089558 | orchestrator | Monday 02 June 2025 17:40:04 +0000 (0:00:01.044) 0:04:51.137 ***********
2025-06-02 17:41:54.089564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:41:54.089571 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.089577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:41:54.089587 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.089598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:41:54.089604 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.089611 | orchestrator |
2025-06-02 17:41:54.089617 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-06-02 17:41:54.089623 | orchestrator | Monday 02 June 2025 17:40:06 +0000 (0:00:01.250) 0:04:52.387 ***********
2025-06-02 17:41:54.089629 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.089635 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.089642 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.089648 | orchestrator |
2025-06-02 17:41:54.089654 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-02 17:41:54.089660 | orchestrator | Monday 02 June 2025 17:40:08 +0000 (0:00:01.844) 0:04:54.231 ***********
2025-06-02 17:41:54.089666 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:41:54.089672 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:41:54.089678 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:41:54.089685 | orchestrator |
2025-06-02 17:41:54.089691 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-02 17:41:54.089697 | orchestrator | Monday 02 June 2025 17:40:10 +0000 (0:00:02.408) 0:04:56.639 ***********
2025-06-02 17:41:54.089703 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:41:54.089709 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:41:54.089715 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:41:54.089721 | orchestrator |
2025-06-02 17:41:54.089728 | orchestrator | TASK [include_role : octavia] **************************************************
2025-06-02 17:41:54.089734 | orchestrator | Monday 02 June 2025 17:40:13 +0000 (0:00:03.193) 0:04:59.833 ***********
2025-06-02 17:41:54.089740 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.089750 | orchestrator |
2025-06-02 17:41:54.089760 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-06-02 17:41:54.089771 | orchestrator | Monday 02 June 2025 17:40:14 +0000 (0:00:01.337) 0:05:01.170 ***********
2025-06-02 17:41:54.089786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:41:54.089798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:41:54.089816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:41:54.089838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:41:54.089847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:41:54.089853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:41:54.089890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:41:54.089896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:41:54.089904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:41:54.089926 | orchestrator |
2025-06-02 17:41:54.089932 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-06-02 17:41:54.089938 | orchestrator | Monday 02 June 2025 17:40:18 +0000 (0:00:03.614) 0:05:04.785 ***********
2025-06-02 17:41:54.089946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:41:54.089952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:41:54.089958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.089976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:41:54.089982 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.089988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:41:54.089996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:41:54.090002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.090008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.090014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:41:54.090056 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.090064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:41:54.090071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:41:54.090080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.090086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:41:54.090092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:41:54.090098 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.090104 | orchestrator |
2025-06-02 17:41:54.090109 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-06-02 17:41:54.090115 | orchestrator | Monday 02 June 2025 17:40:19 +0000 (0:00:00.743) 0:05:05.529 ***********
2025-06-02 17:41:54.090120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 17:41:54.090130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 17:41:54.090136 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.090144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 17:41:54.090150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 17:41:54.090156 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.090161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 17:41:54.090167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-06-02 17:41:54.090172 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.090177 | orchestrator |
2025-06-02 17:41:54.090183 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-06-02 17:41:54.090188 | orchestrator | Monday 02 June 2025 17:40:20 +0000 (0:00:00.887) 0:05:06.417 ***********
2025-06-02 17:41:54.090194 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.090199 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.090204 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.090210 | orchestrator |
2025-06-02 17:41:54.090215 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-06-02 17:41:54.090220 | orchestrator | Monday 02 June 2025 17:40:21 +0000 (0:00:01.687) 0:05:08.104 ***********
2025-06-02 17:41:54.090226 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:41:54.090231 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:41:54.090252 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:41:54.090262 | orchestrator |
2025-06-02 17:41:54.090268 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-06-02 17:41:54.090273 | orchestrator | Monday 02 June 2025 17:40:24 +0000 (0:00:02.204) 0:05:10.309 ***********
2025-06-02 17:41:54.090278 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.090284 | orchestrator |
2025-06-02 17:41:54.090289 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-06-02 17:41:54.090295 | orchestrator | Monday 02 June 2025 17:40:25 +0000 (0:00:01.349) 0:05:11.659 ***********
2025-06-02 17:41:54.090310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 17:41:54.090317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 17:41:54.090331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 17:41:54.090337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 17:41:54.090347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 17:41:54.090354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 17:41:54.090373 | orchestrator |
2025-06-02 17:41:54.090379 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-06-02 17:41:54.090385 | orchestrator | Monday 02 June 2025 17:40:31 +0000 (0:00:05.595) 0:05:17.254 ***********
2025-06-02 17:41:54.090394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 17:41:54.090400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 17:41:54.090407 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.090416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 17:41:54.090422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-02 17:41:54.090432 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.090442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-02 17:41:54.090448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes':
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:41:54.090454 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.090459 | orchestrator | 2025-06-02 17:41:54.090464 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-02 17:41:54.090470 | orchestrator | Monday 02 June 2025 17:40:32 +0000 (0:00:01.062) 0:05:18.317 *********** 2025-06-02 17:41:54.090475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 17:41:54.090484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:41:54.090490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:41:54.090501 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.090507 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 17:41:54.090512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:41:54.090518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:41:54.090523 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.090529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 17:41:54.090535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:41:54.090540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:41:54.090546 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.090551 | orchestrator | 2025-06-02 17:41:54.090557 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-02 17:41:54.090562 | orchestrator | Monday 02 June 2025 17:40:32 +0000 (0:00:00.891) 0:05:19.208 *********** 2025-06-02 
17:41:54.090568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.090573 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.090578 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.090584 | orchestrator | 2025-06-02 17:41:54.090593 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-02 17:41:54.090599 | orchestrator | Monday 02 June 2025 17:40:33 +0000 (0:00:00.457) 0:05:19.665 *********** 2025-06-02 17:41:54.090604 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.090609 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.090615 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.090620 | orchestrator | 2025-06-02 17:41:54.090626 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-02 17:41:54.090631 | orchestrator | Monday 02 June 2025 17:40:34 +0000 (0:00:01.492) 0:05:21.158 *********** 2025-06-02 17:41:54.090636 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:41:54.090642 | orchestrator | 2025-06-02 17:41:54.090647 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-02 17:41:54.090653 | orchestrator | Monday 02 June 2025 17:40:36 +0000 (0:00:01.704) 0:05:22.862 *********** 2025-06-02 17:41:54.090659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:41:54.090668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:41:54.090679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:41:54.090706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:41:54.090711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:41:54.090730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:41:54.090742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:41:54.090783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:41:54.090788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:41:54.090825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:41:54.090831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:41:54.090837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:41:54.090855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090887 | orchestrator | 2025-06-02 17:41:54.090893 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-02 17:41:54.090898 | orchestrator | Monday 02 June 2025 17:40:40 +0000 (0:00:04.292) 0:05:27.154 *********** 2025-06-02 17:41:54.090907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:41:54.090913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:41:54.090923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:41:54.090957 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:41:54.090964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.090973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-02 17:41:54.090979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.090985 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.090994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:41:54.091000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2025-06-02 17:41:54.091005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.091032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:41:54.091041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:41:54.091047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:41:54.091053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:41:54.091059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.091095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:41:54.091101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:41:54.091113 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.091139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:41:54.091145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:41:54.091160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:41:54.091166 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091172 | orchestrator |
2025-06-02 17:41:54.091177 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-06-02 17:41:54.091183 | orchestrator | Monday 02 June 2025 17:40:42 +0000 (0:00:01.558) 0:05:28.713 ***********
2025-06-02 17:41:54.091189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-02 17:41:54.091194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-02 17:41:54.091200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-02 17:41:54.091206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-02 17:41:54.091211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 17:41:54.091222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 17:41:54.091228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 17:41:54.091252 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.091259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 17:41:54.091265 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.091270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-02 17:41:54.091276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-02 17:41:54.091282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 17:41:54.091287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 17:41:54.091293 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091298 | orchestrator |
2025-06-02 17:41:54.091304 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-06-02 17:41:54.091309 | orchestrator | Monday 02 June 2025 17:40:43 +0000 (0:00:01.034) 0:05:29.747 ***********
2025-06-02 17:41:54.091314 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.091320 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.091325 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091331 | orchestrator |
2025-06-02 17:41:54.091336 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-06-02 17:41:54.091342 | orchestrator | Monday 02 June 2025 17:40:43 +0000 (0:00:00.458) 0:05:30.206 ***********
2025-06-02 17:41:54.091351 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.091357 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.091362 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091368 | orchestrator |
2025-06-02 17:41:54.091373 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-06-02 17:41:54.091379 | orchestrator | Monday 02 June 2025 17:40:45 +0000 (0:00:01.804) 0:05:32.011 ***********
2025-06-02 17:41:54.091384 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.091390 | orchestrator |
2025-06-02 17:41:54.091395 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-06-02 17:41:54.091400 | orchestrator | Monday 02 June 2025 17:40:47 +0000 (0:00:01.803) 0:05:33.814 ***********
2025-06-02
17:41:54.091411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:41:54.091421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:41:54.091427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:41:54.091433 | orchestrator |
2025-06-02 17:41:54.091439 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-06-02 17:41:54.091444 | orchestrator | Monday 02 June 2025 17:40:50 +0000 (0:00:02.901) 0:05:36.716 ***********
2025-06-02 17:41:54.091454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 17:41:54.091464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.091469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 17:41:54.091475 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.091494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 17:41:54.091501 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091506 | orchestrator |
2025-06-02 17:41:54.091512 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-06-02 17:41:54.091517 | orchestrator | Monday 02 June 2025 17:40:50 +0000 (0:00:00.396) 0:05:37.112 ***********
2025-06-02 17:41:54.091523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-02 17:41:54.091528 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.091534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-02 17:41:54.091539 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.091545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-02 17:41:54.091550 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091556 | orchestrator |
2025-06-02 17:41:54.091561 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-06-02 17:41:54.091566 | orchestrator | Monday 02 June 2025 17:40:51 +0000 (0:00:00.988) 0:05:38.101 ***********
2025-06-02 17:41:54.091572 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.091577 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.091583 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091588 | orchestrator |
2025-06-02 17:41:54.091593 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-06-02 17:41:54.091603 | orchestrator | Monday 02 June 2025 17:40:52 +0000 (0:00:00.474) 0:05:38.575 ***********
2025-06-02 17:41:54.091608 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:41:54.091614 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:41:54.091619 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:41:54.091624 | orchestrator |
2025-06-02 17:41:54.091630 | orchestrator | TASK [include_role : skyline] **************************************************
2025-06-02 17:41:54.091669 | orchestrator | Monday 02 June 2025 17:40:53 +0000 (0:00:01.365) 0:05:39.940 ***********
2025-06-02 17:41:54.091676 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:41:54.091681 | orchestrator |
2025-06-02 17:41:54.091687 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-06-02 17:41:54.091692 | orchestrator | Monday 02 June 2025 17:40:55 +0000 (0:00:01.816) 0:05:41.757 ***********
2025-06-02 17:41:54.091698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.091704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.091714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.091721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.091735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.091741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 17:41:54.091747 | orchestrator | 2025-06-02 17:41:54.091752 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-02 17:41:54.091758 | orchestrator | Monday 02 June 2025 17:41:01 +0000 (0:00:06.029) 0:05:47.787 *********** 2025-06-02 17:41:54.091768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.091774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.091784 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.091792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.091798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.091804 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.091813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.091819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 17:41:54.091828 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.091834 | orchestrator | 2025-06-02 17:41:54.091839 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-02 17:41:54.091845 | orchestrator | Monday 02 June 2025 17:41:02 +0000 (0:00:00.644) 0:05:48.431 
*********** 2025-06-02 17:41:54.091850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091875 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.091881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091903 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.091908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:41:54.091953 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.091959 | orchestrator | 2025-06-02 17:41:54.091964 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-02 17:41:54.091969 | orchestrator | Monday 02 June 2025 17:41:03 +0000 (0:00:01.656) 0:05:50.087 *********** 2025-06-02 17:41:54.091975 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.091980 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.091986 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.091991 | orchestrator | 2025-06-02 17:41:54.091997 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] 
************ 2025-06-02 17:41:54.092002 | orchestrator | Monday 02 June 2025 17:41:05 +0000 (0:00:01.374) 0:05:51.462 *********** 2025-06-02 17:41:54.092007 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.092013 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.092018 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.092024 | orchestrator | 2025-06-02 17:41:54.092029 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-02 17:41:54.092034 | orchestrator | Monday 02 June 2025 17:41:07 +0000 (0:00:02.202) 0:05:53.664 *********** 2025-06-02 17:41:54.092040 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092045 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092051 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092056 | orchestrator | 2025-06-02 17:41:54.092062 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-02 17:41:54.092067 | orchestrator | Monday 02 June 2025 17:41:07 +0000 (0:00:00.334) 0:05:53.999 *********** 2025-06-02 17:41:54.092072 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092078 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092083 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092089 | orchestrator | 2025-06-02 17:41:54.092094 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-02 17:41:54.092099 | orchestrator | Monday 02 June 2025 17:41:08 +0000 (0:00:00.650) 0:05:54.649 *********** 2025-06-02 17:41:54.092105 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092110 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092116 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092121 | orchestrator | 2025-06-02 17:41:54.092127 | orchestrator | TASK [include_role : venus] 
**************************************************** 2025-06-02 17:41:54.092132 | orchestrator | Monday 02 June 2025 17:41:08 +0000 (0:00:00.315) 0:05:54.965 *********** 2025-06-02 17:41:54.092141 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092147 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092152 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092158 | orchestrator | 2025-06-02 17:41:54.092163 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-02 17:41:54.092168 | orchestrator | Monday 02 June 2025 17:41:09 +0000 (0:00:00.316) 0:05:55.282 *********** 2025-06-02 17:41:54.092174 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092179 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092185 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092190 | orchestrator | 2025-06-02 17:41:54.092195 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-02 17:41:54.092201 | orchestrator | Monday 02 June 2025 17:41:09 +0000 (0:00:00.316) 0:05:55.598 *********** 2025-06-02 17:41:54.092206 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092211 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092217 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092222 | orchestrator | 2025-06-02 17:41:54.092228 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-02 17:41:54.092233 | orchestrator | Monday 02 June 2025 17:41:10 +0000 (0:00:00.849) 0:05:56.448 *********** 2025-06-02 17:41:54.092279 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092285 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092305 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092311 | orchestrator | 2025-06-02 17:41:54.092316 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by 
status] ********************** 2025-06-02 17:41:54.092322 | orchestrator | Monday 02 June 2025 17:41:10 +0000 (0:00:00.654) 0:05:57.103 *********** 2025-06-02 17:41:54.092327 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092333 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092338 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092344 | orchestrator | 2025-06-02 17:41:54.092349 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-02 17:41:54.092355 | orchestrator | Monday 02 June 2025 17:41:11 +0000 (0:00:00.348) 0:05:57.451 *********** 2025-06-02 17:41:54.092360 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092365 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092371 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092376 | orchestrator | 2025-06-02 17:41:54.092381 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-02 17:41:54.092387 | orchestrator | Monday 02 June 2025 17:41:12 +0000 (0:00:00.811) 0:05:58.263 *********** 2025-06-02 17:41:54.092392 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092398 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092403 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092408 | orchestrator | 2025-06-02 17:41:54.092414 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-02 17:41:54.092419 | orchestrator | Monday 02 June 2025 17:41:13 +0000 (0:00:01.184) 0:05:59.447 *********** 2025-06-02 17:41:54.092425 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092430 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092435 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092441 | orchestrator | 2025-06-02 17:41:54.092446 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-02 17:41:54.092452 | orchestrator | 
Monday 02 June 2025 17:41:14 +0000 (0:00:00.805) 0:06:00.252 *********** 2025-06-02 17:41:54.092457 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.092466 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.092472 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.092478 | orchestrator | 2025-06-02 17:41:54.092483 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-02 17:41:54.092489 | orchestrator | Monday 02 June 2025 17:41:23 +0000 (0:00:09.430) 0:06:09.683 *********** 2025-06-02 17:41:54.092494 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092500 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092506 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092511 | orchestrator | 2025-06-02 17:41:54.092517 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-02 17:41:54.092522 | orchestrator | Monday 02 June 2025 17:41:24 +0000 (0:00:00.732) 0:06:10.415 *********** 2025-06-02 17:41:54.092527 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.092533 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.092538 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.092544 | orchestrator | 2025-06-02 17:41:54.092549 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-02 17:41:54.092555 | orchestrator | Monday 02 June 2025 17:41:33 +0000 (0:00:08.870) 0:06:19.286 *********** 2025-06-02 17:41:54.092560 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092566 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092571 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092577 | orchestrator | 2025-06-02 17:41:54.092582 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-02 17:41:54.092588 | orchestrator | Monday 02 June 2025 17:41:37 +0000 
(0:00:04.709) 0:06:23.995 *********** 2025-06-02 17:41:54.092593 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:41:54.092599 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:41:54.092604 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:41:54.092610 | orchestrator | 2025-06-02 17:41:54.092615 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-02 17:41:54.092639 | orchestrator | Monday 02 June 2025 17:41:47 +0000 (0:00:09.480) 0:06:33.476 *********** 2025-06-02 17:41:54.092648 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092656 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092673 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092683 | orchestrator | 2025-06-02 17:41:54.092691 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-02 17:41:54.092699 | orchestrator | Monday 02 June 2025 17:41:47 +0000 (0:00:00.350) 0:06:33.827 *********** 2025-06-02 17:41:54.092707 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092715 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092723 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092731 | orchestrator | 2025-06-02 17:41:54.092738 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-02 17:41:54.092746 | orchestrator | Monday 02 June 2025 17:41:48 +0000 (0:00:00.741) 0:06:34.568 *********** 2025-06-02 17:41:54.092754 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092762 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092778 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092787 | orchestrator | 2025-06-02 17:41:54.092796 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-02 17:41:54.092804 | orchestrator | Monday 02 June 2025 17:41:48 +0000 
(0:00:00.383) 0:06:34.952 *********** 2025-06-02 17:41:54.092812 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092819 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092827 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092836 | orchestrator | 2025-06-02 17:41:54.092841 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-02 17:41:54.092846 | orchestrator | Monday 02 June 2025 17:41:49 +0000 (0:00:00.468) 0:06:35.420 *********** 2025-06-02 17:41:54.092851 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092855 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092860 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092865 | orchestrator | 2025-06-02 17:41:54.092869 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-02 17:41:54.092874 | orchestrator | Monday 02 June 2025 17:41:49 +0000 (0:00:00.377) 0:06:35.798 *********** 2025-06-02 17:41:54.092879 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:41:54.092884 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:41:54.092888 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:41:54.092893 | orchestrator | 2025-06-02 17:41:54.092898 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-02 17:41:54.092903 | orchestrator | Monday 02 June 2025 17:41:50 +0000 (0:00:00.700) 0:06:36.498 *********** 2025-06-02 17:41:54.092907 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092912 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092917 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092922 | orchestrator | 2025-06-02 17:41:54.092926 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-02 17:41:54.092931 | orchestrator | Monday 02 June 2025 17:41:51 +0000 (0:00:00.934) 
0:06:37.433 *********** 2025-06-02 17:41:54.092936 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:41:54.092941 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:41:54.092946 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:41:54.092950 | orchestrator | 2025-06-02 17:41:54.092955 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:41:54.092960 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 17:41:54.092965 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 17:41:54.092970 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-02 17:41:54.092980 | orchestrator | 2025-06-02 17:41:54.092985 | orchestrator | 2025-06-02 17:41:54.092990 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:41:54.092995 | orchestrator | Monday 02 June 2025 17:41:51 +0000 (0:00:00.771) 0:06:38.204 *********** 2025-06-02 17:41:54.093004 | orchestrator | =============================================================================== 2025-06-02 17:41:54.093009 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.48s 2025-06-02 17:41:54.093013 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.43s 2025-06-02 17:41:54.093018 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.87s 2025-06-02 17:41:54.093023 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 6.05s 2025-06-02 17:41:54.093027 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.03s 2025-06-02 17:41:54.093032 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.60s 
2025-06-02 17:41:54.093037 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.50s 2025-06-02 17:41:54.093042 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.86s 2025-06-02 17:41:54.093046 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.72s 2025-06-02 17:41:54.093051 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.71s 2025-06-02 17:41:54.093056 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.71s 2025-06-02 17:41:54.093061 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.70s 2025-06-02 17:41:54.093065 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.37s 2025-06-02 17:41:54.093070 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.35s 2025-06-02 17:41:54.093075 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.33s 2025-06-02 17:41:54.093080 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.29s 2025-06-02 17:41:54.093084 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.15s 2025-06-02 17:41:54.093089 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.03s 2025-06-02 17:41:54.093094 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.93s 2025-06-02 17:41:54.093099 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.90s 2025-06-02 17:41:57.137100 | orchestrator | 2025-06-02 17:41:57 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED 2025-06-02 17:41:57.138885 | orchestrator | 2025-06-02 17:41:57 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in 
state STARTED
2025-06-02 17:41:57.141169 | orchestrator | 2025-06-02 17:41:57 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:41:57.141203 | orchestrator | 2025-06-02 17:41:57 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: tasks bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5, 90e03e9f-e6ce-4d32-b400-95438ff27ed8 and 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 reported "is in state STARTED" every ~3 seconds from 17:42:00 through 17:43:50 ...]
2025-06-02 17:43:53.165779 | orchestrator | 2025-06-02 17:43:53 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:43:53.167831 | orchestrator | 2025-06-02 17:43:53 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:43:53.169921 | orchestrator | 2025-06-02 17:43:53 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02
17:43:53.170386 | orchestrator | 2025-06-02 17:43:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:43:56.223379 | orchestrator | 2025-06-02 17:43:56 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state STARTED
2025-06-02 17:43:56.223479 | orchestrator | 2025-06-02 17:43:56 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:43:56.223518 | orchestrator | 2025-06-02 17:43:56 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:43:56.223584 | orchestrator | 2025-06-02 17:43:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:43:59.278456 | orchestrator | 2025-06-02 17:43:59 | INFO  | Task bd0b7529-f0eb-44c5-ba18-cf9d0c6f9bc5 is in state SUCCESS
2025-06-02 17:43:59.279563 | orchestrator |
2025-06-02 17:43:59.279637 | orchestrator |
2025-06-02 17:43:59.279649 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-02 17:43:59.279659 | orchestrator |
2025-06-02 17:43:59.279668 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 17:43:59.279676 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.714) 0:00:00.714 ***********
2025-06-02 17:43:59.279686 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:43:59.279695 | orchestrator |
2025-06-02 17:43:59.279703 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 17:43:59.279725 | orchestrator | Monday 02 June 2025 17:32:43 +0000 (0:00:00.984) 0:00:01.699 ***********
2025-06-02 17:43:59.279734 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.279743 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.279751 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.279759 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.279767 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.279791 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.279799 | orchestrator |
2025-06-02 17:43:59.279808 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 17:43:59.279826 | orchestrator | Monday 02 June 2025 17:32:44 +0000 (0:00:01.570) 0:00:03.269 ***********
2025-06-02 17:43:59.279834 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.279842 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.279850 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.279858 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.279866 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.279873 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.279881 | orchestrator |
2025-06-02 17:43:59.279889 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 17:43:59.279897 | orchestrator | Monday 02 June 2025 17:32:45 +0000 (0:00:00.946) 0:00:04.216 ***********
2025-06-02 17:43:59.279905 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.279913 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.279921 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.279928 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.279936 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.279944 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.279952 | orchestrator |
2025-06-02 17:43:59.279960 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 17:43:59.279968 | orchestrator | Monday 02 June 2025 17:32:46 +0000 (0:00:01.202) 0:00:05.418 ***********
2025-06-02 17:43:59.279976 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.279984 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.279992 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.280000 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.280074 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.280085 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.280094 | orchestrator |
2025-06-02 17:43:59.280103 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 17:43:59.280113 | orchestrator | Monday 02 June 2025 17:32:47 +0000 (0:00:00.972) 0:00:06.391 ***********
2025-06-02 17:43:59.280122 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.280131 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.280140 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.280149 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.280159 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.280167 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.280177 | orchestrator |
2025-06-02 17:43:59.280186 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 17:43:59.280195 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.718) 0:00:07.109 ***********
2025-06-02 17:43:59.280205 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.280214 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.280223 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.280231 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.280241 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.280250 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.280259 | orchestrator |
2025-06-02 17:43:59.280268 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 17:43:59.280277 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.970) 0:00:08.079 ***********
2025-06-02 17:43:59.280286 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.280296 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.280305 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.280314 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.280323 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.280332 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.280341 | orchestrator |
2025-06-02 17:43:59.280351 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-02 17:43:59.280360 | orchestrator | Monday 02 June 2025 17:32:50 +0000 (0:00:00.899) 0:00:08.979 ***********
2025-06-02 17:43:59.280369 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.280378 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.280387 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.280396 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.280406 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.280415 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.280424 | orchestrator |
2025-06-02 17:43:59.280433 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-02 17:43:59.280442 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:01.056) 0:00:10.036 ***********
2025-06-02 17:43:59.280452 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.280463 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 17:43:59.280472 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 17:43:59.280481 | orchestrator |
2025-06-02 17:43:59.280489 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-02 17:43:59.280497 | orchestrator | Monday 02 June 2025 17:32:52 +0000 (0:00:00.704) 0:00:10.740 ***********
2025-06-02 17:43:59.280505 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.280513 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.280520 | orchestrator |
ok: [testbed-node-2]
2025-06-02 17:43:59.280528 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.280537 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.280544 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.280552 | orchestrator |
2025-06-02 17:43:59.280571 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-02 17:43:59.280580 | orchestrator | Monday 02 June 2025 17:32:53 +0000 (0:00:01.284) 0:00:12.025 ***********
2025-06-02 17:43:59.280595 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.280603 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 17:43:59.280611 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 17:43:59.280619 | orchestrator |
2025-06-02 17:43:59.280627 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-02 17:43:59.280641 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:03.160) 0:00:15.186 ***********
2025-06-02 17:43:59.280649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.280657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:43:59.280665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:43:59.280673 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.280681 | orchestrator |
2025-06-02 17:43:59.280689 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-02 17:43:59.280697 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:00.654) 0:00:15.841 ***********
2025-06-02 17:43:59.280706 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280717 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280725 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280733 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.280741 | orchestrator |
2025-06-02 17:43:59.280749 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-02 17:43:59.280757 | orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:00.976) 0:00:16.817 ***********
2025-06-02 17:43:59.280767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280795 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.280803 | orchestrator |
2025-06-02 17:43:59.280810 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-02 17:43:59.280818 | orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:00.568) 0:00:17.386 ***********
2025-06-02 17:43:59.280829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 17:32:53.993848', 'end': '2025-06-02 17:32:54.281987', 'delta': '0:00:00.288139', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 17:32:55.057084', 'end': '2025-06-02 17:32:55.343167', 'delta': '0:00:00.286083', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280867 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 17:32:55.975497', 'end': '2025-06-02 17:32:56.254088', 'delta': '0:00:00.278591', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.280876 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.280884 | orchestrator |
2025-06-02 17:43:59.280892 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-02 17:43:59.280900 | orchestrator | Monday 02 June 2025 17:32:59 +0000 (0:00:00.324) 0:00:17.711 ***********
2025-06-02 17:43:59.280908 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.280916 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.280924 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.280932 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.280940 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.280948 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.280956 | orchestrator |
2025-06-02 17:43:59.280964 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-02 17:43:59.280972 | orchestrator | Monday 02 June 2025
17:33:00 +0000 (0:00:01.783) 0:00:19.494 ***********
2025-06-02 17:43:59.280980 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.280988 | orchestrator |
2025-06-02 17:43:59.280996 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-02 17:43:59.281004 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.822) 0:00:20.317 ***********
2025-06-02 17:43:59.281012 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281043 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.281052 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.281060 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.281068 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.281076 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.281084 | orchestrator |
2025-06-02 17:43:59.281092 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-02 17:43:59.281100 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.977) 0:00:21.294 ***********
2025-06-02 17:43:59.281108 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281122 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.281130 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.281138 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.281146 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.281154 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.281162 | orchestrator |
2025-06-02 17:43:59.281170 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 17:43:59.281177 | orchestrator | Monday 02 June 2025 17:33:04 +0000 (0:00:01.442) 0:00:22.737 ***********
2025-06-02 17:43:59.281185 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281193 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.281201 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.281209 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.281217 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.281225 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.281233 | orchestrator |
2025-06-02 17:43:59.281241 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-02 17:43:59.281249 | orchestrator | Monday 02 June 2025 17:33:04 +0000 (0:00:00.908) 0:00:23.645 ***********
2025-06-02 17:43:59.281257 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281265 | orchestrator |
2025-06-02 17:43:59.281272 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-02 17:43:59.281280 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.288) 0:00:23.934 ***********
2025-06-02 17:43:59.281289 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281296 | orchestrator |
2025-06-02 17:43:59.281304 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 17:43:59.281312 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.359) 0:00:24.294 ***********
2025-06-02 17:43:59.281320 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281329 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.281337 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.281345 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.281354 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.281362 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.281370 | orchestrator |
2025-06-02 17:43:59.281378 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-02 17:43:59.281401 | orchestrator | Monday 02 June 2025 17:33:06 +0000 (0:00:01.083) 0:00:25.377 ***********
2025-06-02 17:43:59.281410 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281418 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.281426 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.281434 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.281442 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.281450 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.281458 | orchestrator |
2025-06-02 17:43:59.281466 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-02 17:43:59.281473 | orchestrator | Monday 02 June 2025 17:33:07 +0000 (0:00:01.157) 0:00:26.535 ***********
2025-06-02 17:43:59.281481 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281494 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.281504 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.281511 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.281519 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.281527 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.281535 | orchestrator |
2025-06-02 17:43:59.281543 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-02 17:43:59.281551 | orchestrator | Monday 02 June 2025 17:33:08 +0000 (0:00:00.747) 0:00:27.282 ***********
2025-06-02 17:43:59.281559 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.281567 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.281575 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.281584 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.281596 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.281604 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.281613 | orchestrator |
2025-06-02 17:43:59.281620 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-02 17:43:59.281628 | orchestrator | Monday 02 June 2025 17:33:09 +0000 (0:00:00.857) 0:00:28.139 *********** 2025-06-02 17:43:59.281636 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.281644 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.281652 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.281660 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.281667 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.281675 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.281683 | orchestrator | 2025-06-02 17:43:59.281691 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 17:43:59.281699 | orchestrator | Monday 02 June 2025 17:33:10 +0000 (0:00:00.763) 0:00:28.903 *********** 2025-06-02 17:43:59.281707 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.281715 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.281723 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.281731 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.281739 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.281747 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.281754 | orchestrator | 2025-06-02 17:43:59.281763 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 17:43:59.281771 | orchestrator | Monday 02 June 2025 17:33:11 +0000 (0:00:00.950) 0:00:29.854 *********** 2025-06-02 17:43:59.281779 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.281787 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.281795 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.281803 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.281810 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.281818 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.281826 | 
orchestrator | 2025-06-02 17:43:59.281835 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 17:43:59.281843 | orchestrator | Monday 02 June 2025 17:33:11 +0000 (0:00:00.784) 0:00:30.638 *********** 2025-06-02 17:43:59.281851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.281964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.281982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.281991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part1', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part14', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part15', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282176 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.282185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282252 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part1', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part14', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part15', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part16', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282290 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282299 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.282311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704', 'dm-uuid-LVM-KMmsn0EVITsGj9TWOXyYzPFcl9Vg8RYvuZnGX1fEon7QrG8BXfWLQNyn31cle28T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9', 'dm-uuid-LVM-CESb8QC4Tp8nXi0PF2s5S4xvHCsfRXnP3wjEcSkbBJWdn2phWRkcvR7USA0zDhtB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282331 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282357 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.282365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26', 'dm-uuid-LVM-1VZYIg7KCwGMXSKssoRinN9zS5U8TxXk9Uvj5DuJRlLOZWdlspbHlbvb9xrYZJt2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9', 'dm-uuid-LVM-Yn18L1MERL5p93hCY1551alTwNNRtouMaJhiE4ZDnlFkO3T4lsYdSaRGsHed8tf2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282476 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KeHdNZ-tekv-q3Jm-pKmi-C8MP-DuHa-KUx04F', 'scsi-0QEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1', 'scsi-SQEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FSVcen-xfak-l0K6-V65O-0nOf-M99l-6K8YWo', 'scsi-0QEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361', 'scsi-SQEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282567 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767', 'scsi-SQEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282606 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PWQM6B-jy51-yHR4-Xcur-JWGt-c4rk-j5fZG9', 'scsi-0QEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33', 'scsi-SQEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cYdt4D-B5Wq-Mjwb-9Ydz-e3BM-44vE-VXd1px', 'scsi-0QEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38', 'scsi-SQEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148', 'scsi-SQEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282661 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.282669 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.282679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c', 'dm-uuid-LVM-b2DUe6pPjWw4q9EUJVUjIvE3Me0qGzC9JNcGY7fMyv8yeJzKZcRP1q95YMxjL7oH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b', 'dm-uuid-LVM-oXW0HnudB9NGFV2CziApkCUlse954NVKg0dAUucQMXIjGY5IE8PcBcxv61Xaa3tO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:43:59.282789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gjnCCA-r0Z1-l49w-WvEU-R0jc-GTtC-9JSoTT', 'scsi-0QEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f', 'scsi-SQEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vYaIGf-yEgl-Ymyy-5uFH-5UfI-zYmZ-prR9B8', 'scsi-0QEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb', 'scsi-SQEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a', 'scsi-SQEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:43:59.282844 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.282853 | orchestrator | 2025-06-02 17:43:59.282861 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 17:43:59.282870 | orchestrator | Monday 02 June 2025 17:33:14 +0000 (0:00:02.599) 0:00:33.238 *********** 2025-06-02 17:43:59.282883 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282892 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282914 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282922 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282945 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282966 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.282981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part1', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part14', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part15', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part16', 'scsi-SQEMU_QEMU_HARDDISK_9ed89d00-d2b1-4316-9e61-ba744145484e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 17:43:59.282996 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.283009 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.283083 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.283158 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.283169 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.283177 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.283192 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.283205 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284148 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part1', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part14', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part15', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part16', 'scsi-SQEMU_QEMU_HARDDISK_d916e83a-af5b-4ece-a73c-3cfc7c74b767-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284193 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284201 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.284209 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284218 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284236 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284244 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284251 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284287 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284295 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284305 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284313 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.284325 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part1', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part14', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part15', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part16', 'scsi-SQEMU_QEMU_HARDDISK_719d5228-8def-49aa-934d-4d9ae9a2b478-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284339 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704', 'dm-uuid-LVM-KMmsn0EVITsGj9TWOXyYzPFcl9Vg8RYvuZnGX1fEon7QrG8BXfWLQNyn31cle28T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284358 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9', 'dm-uuid-LVM-CESb8QC4Tp8nXi0PF2s5S4xvHCsfRXnP3wjEcSkbBJWdn2phWRkcvR7USA0zDhtB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284385 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284406 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284427 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.284435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284447 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284467 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KeHdNZ-tekv-q3Jm-pKmi-C8MP-DuHa-KUx04F', 'scsi-0QEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1', 'scsi-SQEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FSVcen-xfak-l0K6-V65O-0nOf-M99l-6K8YWo', 'scsi-0QEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361', 'scsi-SQEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767', 'scsi-SQEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26', 'dm-uuid-LVM-1VZYIg7KCwGMXSKssoRinN9zS5U8TxXk9Uvj5DuJRlLOZWdlspbHlbvb9xrYZJt2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284520 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9', 'dm-uuid-LVM-Yn18L1MERL5p93hCY1551alTwNNRtouMaJhiE4ZDnlFkO3T4lsYdSaRGsHed8tf2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284532 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284539 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284560 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.284567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284583 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284601 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284608 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c', 'dm-uuid-LVM-b2DUe6pPjWw4q9EUJVUjIvE3Me0qGzC9JNcGY7fMyv8yeJzKZcRP1q95YMxjL7oH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}},
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284634 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b', 'dm-uuid-LVM-oXW0HnudB9NGFV2CziApkCUlse954NVKg0dAUucQMXIjGY5IE8PcBcxv61Xaa3tO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284642 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PWQM6B-jy51-yHR4-Xcur-JWGt-c4rk-j5fZG9', 'scsi-0QEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33', 'scsi-SQEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cYdt4D-B5Wq-Mjwb-9Ydz-e3BM-44vE-VXd1px', 'scsi-0QEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38', 'scsi-SQEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284670 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284680 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148', 'scsi-SQEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284688 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284702 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284712 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.284719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284737 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284750 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 17:43:59.284774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gjnCCA-r0Z1-l49w-WvEU-R0jc-GTtC-9JSoTT', 'scsi-0QEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f', 'scsi-SQEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284786 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vYaIGf-yEgl-Ymyy-5uFH-5UfI-zYmZ-prR9B8', 'scsi-0QEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb', 'scsi-SQEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284795 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a', 'scsi-SQEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:43:59.284803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:43:59.284814 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.284823 | orchestrator |
2025-06-02 17:43:59.284831 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 17:43:59.284838 | orchestrator | Monday 02 June 2025 17:33:16 +0000 (0:00:01.542) 0:00:34.780 ***********
2025-06-02 17:43:59.284845 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.284852 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.284858 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.284865 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.284872 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.284878 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.284885 | orchestrator |
2025-06-02 17:43:59.284891 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 17:43:59.284898 | orchestrator | Monday 02 June 2025 17:33:18 +0000 (0:00:01.983) 0:00:36.763 ***********
2025-06-02 17:43:59.284904 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.284911 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.284917 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.284924 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.284930 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.284937 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.284943 | orchestrator |
2025-06-02 17:43:59.284953 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 17:43:59.284960 | orchestrator | Monday 02 June 2025 17:33:18 +0000 (0:00:00.488) 0:00:37.252 ***********
2025-06-02 17:43:59.284967 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.284973 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.284980 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.284986 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.284993 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.284999 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285006 | orchestrator |
2025-06-02 17:43:59.285012 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 17:43:59.285079 | orchestrator | Monday 02 June 2025 17:33:19 +0000 (0:00:00.646) 0:00:37.899 ***********
2025-06-02 17:43:59.285087 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.285093 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.285100 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.285106 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285113 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.285120 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285126 | orchestrator |
2025-06-02 17:43:59.285133 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 17:43:59.285140 | orchestrator | Monday 02 June 2025 17:33:19 +0000 (0:00:00.632) 0:00:38.531 ***********
2025-06-02 17:43:59.285146 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.285153 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.285159 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.285166 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285173 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.285179 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285186 | orchestrator |
2025-06-02 17:43:59.285197 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 17:43:59.285205 | orchestrator | Monday 02 June 2025 17:33:20 +0000 (0:00:01.118) 0:00:39.649 ***********
2025-06-02 17:43:59.285211 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.285218 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.285225 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.285231 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285243 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.285250 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285256 | orchestrator |
2025-06-02 17:43:59.285263 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 17:43:59.285270 | orchestrator | Monday 02 June 2025 17:33:21 +0000 (0:00:00.577) 0:00:40.227 ***********
2025-06-02 17:43:59.285276 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.285283 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 17:43:59.285290 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:43:59.285296 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 17:43:59.285302 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 17:43:59.285309 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 17:43:59.285315 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 17:43:59.285321 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:43:59.285327 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 17:43:59.285333 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 17:43:59.285339 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 17:43:59.285345 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 17:43:59.285351 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 17:43:59.285357 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 17:43:59.285364 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 17:43:59.285370 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 17:43:59.285376 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 17:43:59.285382 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 17:43:59.285388 | orchestrator |
2025-06-02 17:43:59.285394 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 17:43:59.285400 | orchestrator | Monday 02 June 2025 17:33:24 +0000 (0:00:03.262) 0:00:43.489 ***********
2025-06-02 17:43:59.285407 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.285413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:43:59.285419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:43:59.285425 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.285431 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 17:43:59.285438 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 17:43:59.285444 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 17:43:59.285450 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.285456 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 17:43:59.285462 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 17:43:59.285468 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 17:43:59.285474 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.285480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 17:43:59.285486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 17:43:59.285492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 17:43:59.285499 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285505 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 17:43:59.285511 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 17:43:59.285517 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 17:43:59.285523 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.285529 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 17:43:59.285539 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 17:43:59.285549 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 17:43:59.285555 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285561 | orchestrator |
2025-06-02 17:43:59.285567 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 17:43:59.285574 | orchestrator | Monday 02 June 2025 17:33:25 +0000 (0:00:00.885) 0:00:44.374 ***********
2025-06-02 17:43:59.285580 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.285586 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.285592 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.285598 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:43:59.285605 | orchestrator |
2025-06-02 17:43:59.285611 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 17:43:59.285618 | orchestrator | Monday 02 June 2025 17:33:27 +0000 (0:00:01.527) 0:00:45.902 ***********
2025-06-02 17:43:59.285625 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285631 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.285637 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285643 | orchestrator |
2025-06-02 17:43:59.285649 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 17:43:59.285655 | orchestrator | Monday 02 June 2025 17:33:27 +0000 (0:00:00.486) 0:00:46.389 ***********
2025-06-02 17:43:59.285661 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285667 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.285676 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285683 | orchestrator |
2025-06-02 17:43:59.285689 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 17:43:59.285695 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:00.517) 0:00:46.906 ***********
2025-06-02 17:43:59.285701 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285707 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.285713 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.285719 | orchestrator |
2025-06-02 17:43:59.285726 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 17:43:59.285732 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:00.309) 0:00:47.216 ***********
2025-06-02 17:43:59.285738 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.285744 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.285750 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.285760 | orchestrator |
2025-06-02 17:43:59.285769 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 17:43:59.285776 | orchestrator | Monday 02 June 2025 17:33:29 +0000 (0:00:00.734) 0:00:47.951 ***********
2025-06-02 17:43:59.285782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:43:59.285788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:43:59.285794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:43:59.285800 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285807 | orchestrator |
2025-06-02 17:43:59.285813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 17:43:59.285819 | orchestrator | Monday 02 June 2025 17:33:29 +0000 (0:00:00.394) 0:00:48.346 ***********
2025-06-02 17:43:59.285825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:43:59.285831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:43:59.285837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:43:59.285844 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285850 | orchestrator |
2025-06-02 17:43:59.285856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 17:43:59.285862 | orchestrator | Monday 02 June 2025 17:33:30 +0000 (0:00:00.459) 0:00:48.805 ***********
2025-06-02 17:43:59.285868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:43:59.285879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:43:59.285885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:43:59.285891 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.285897 | orchestrator |
2025-06-02 17:43:59.285903 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 17:43:59.285910 | orchestrator | Monday 02 June 2025 17:33:30 +0000 (0:00:00.607) 0:00:49.413 ***********
2025-06-02 17:43:59.285916 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.285922 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.285928 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.285935 | orchestrator |
2025-06-02 17:43:59.285941 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 17:43:59.285947 | orchestrator | Monday 02 June 2025 17:33:31 +0000 (0:00:00.742) 0:00:50.156 ***********
2025-06-02 17:43:59.285953 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 17:43:59.285959 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 17:43:59.285965 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 17:43:59.285972 | orchestrator |
2025-06-02 17:43:59.285978 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 17:43:59.285984 | orchestrator | Monday 02 June 2025 17:33:32 +0000 (0:00:01.143) 0:00:51.299 ***********
2025-06-02 17:43:59.285990 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.285996 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 17:43:59.286002 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 17:43:59.286009 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 17:43:59.286073 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 17:43:59.286086 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 17:43:59.286092 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 17:43:59.286099 | orchestrator |
2025-06-02 17:43:59.286105 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 17:43:59.286112 | orchestrator | Monday 02 June 2025 17:33:33 +0000 (0:00:01.076) 0:00:52.375 ***********
2025-06-02 17:43:59.286118 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.286124 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 17:43:59.286130 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 17:43:59.286136 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 17:43:59.286143 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 17:43:59.286149 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 17:43:59.286155 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 17:43:59.286161 | orchestrator |
2025-06-02 17:43:59.286167 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 17:43:59.286173 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:02.366) 0:00:54.742 ***********
2025-06-02 17:43:59.286184 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:43:59.286192 | orchestrator |
2025-06-02 17:43:59.286198 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 17:43:59.286204 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:01.244) 0:00:55.986 ***********
2025-06-02 17:43:59.286211 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:43:59.286222 | orchestrator |
2025-06-02 17:43:59.286228 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 17:43:59.286234 | orchestrator | Monday 02 June 2025
17:33:38 +0000 (0:00:01.173) 0:00:57.160 *********** 2025-06-02 17:43:59.286240 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.286247 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.286253 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.286259 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.286265 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.286271 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.286277 | orchestrator | 2025-06-02 17:43:59.286284 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:43:59.286290 | orchestrator | Monday 02 June 2025 17:33:39 +0000 (0:00:00.844) 0:00:58.005 *********** 2025-06-02 17:43:59.286296 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286302 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286309 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286315 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.286321 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.286327 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.286333 | orchestrator | 2025-06-02 17:43:59.286340 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:43:59.286346 | orchestrator | Monday 02 June 2025 17:33:40 +0000 (0:00:01.507) 0:00:59.512 *********** 2025-06-02 17:43:59.286352 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286358 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286364 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286371 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.286377 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.286383 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.286389 | orchestrator | 2025-06-02 17:43:59.286396 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-06-02 17:43:59.286402 | orchestrator | Monday 02 June 2025 17:33:42 +0000 (0:00:01.379) 0:01:00.891 *********** 2025-06-02 17:43:59.286408 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286414 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286420 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286427 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.286433 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.286439 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.286445 | orchestrator | 2025-06-02 17:43:59.286451 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:43:59.286457 | orchestrator | Monday 02 June 2025 17:33:43 +0000 (0:00:01.199) 0:01:02.091 *********** 2025-06-02 17:43:59.286464 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.286470 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.286476 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.286482 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.286488 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.286494 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.286500 | orchestrator | 2025-06-02 17:43:59.286506 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:43:59.286513 | orchestrator | Monday 02 June 2025 17:33:44 +0000 (0:00:01.127) 0:01:03.219 *********** 2025-06-02 17:43:59.286519 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286525 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286531 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286537 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.286543 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.286549 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.286555 | 
orchestrator | 2025-06-02 17:43:59.286562 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:43:59.286574 | orchestrator | Monday 02 June 2025 17:33:45 +0000 (0:00:00.807) 0:01:04.026 *********** 2025-06-02 17:43:59.286580 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286586 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286593 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286602 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.286608 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.286614 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.286620 | orchestrator | 2025-06-02 17:43:59.286627 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:43:59.286633 | orchestrator | Monday 02 June 2025 17:33:46 +0000 (0:00:00.890) 0:01:04.917 *********** 2025-06-02 17:43:59.286639 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.286645 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.286651 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.286658 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.286664 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.286670 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.286676 | orchestrator | 2025-06-02 17:43:59.286682 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:43:59.286689 | orchestrator | Monday 02 June 2025 17:33:48 +0000 (0:00:01.811) 0:01:06.728 *********** 2025-06-02 17:43:59.286695 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.286701 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.286707 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.286713 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.286719 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.286725 | 
orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.286731 | orchestrator | 2025-06-02 17:43:59.286737 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:43:59.286744 | orchestrator | Monday 02 June 2025 17:33:49 +0000 (0:00:01.542) 0:01:08.271 *********** 2025-06-02 17:43:59.286750 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286756 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286762 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286768 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.286785 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.286792 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.286798 | orchestrator | 2025-06-02 17:43:59.286804 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:43:59.286811 | orchestrator | Monday 02 June 2025 17:33:50 +0000 (0:00:00.768) 0:01:09.039 *********** 2025-06-02 17:43:59.286817 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.286823 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.286829 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.286835 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.286841 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.286848 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.286854 | orchestrator | 2025-06-02 17:43:59.286860 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:43:59.286866 | orchestrator | Monday 02 June 2025 17:33:51 +0000 (0:00:01.214) 0:01:10.254 *********** 2025-06-02 17:43:59.286873 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286879 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286885 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286891 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 17:43:59.286897 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.286904 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.286910 | orchestrator | 2025-06-02 17:43:59.286916 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:43:59.286922 | orchestrator | Monday 02 June 2025 17:33:52 +0000 (0:00:00.931) 0:01:11.185 *********** 2025-06-02 17:43:59.286929 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286935 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.286941 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.286951 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.286958 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.286964 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.286970 | orchestrator | 2025-06-02 17:43:59.286976 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:43:59.286982 | orchestrator | Monday 02 June 2025 17:33:53 +0000 (0:00:00.972) 0:01:12.157 *********** 2025-06-02 17:43:59.286988 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.286994 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287001 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287007 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.287013 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.287038 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.287047 | orchestrator | 2025-06-02 17:43:59.287056 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 17:43:59.287067 | orchestrator | Monday 02 June 2025 17:33:54 +0000 (0:00:00.641) 0:01:12.799 *********** 2025-06-02 17:43:59.287077 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.287087 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287097 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287107 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287113 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287119 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287125 | orchestrator | 2025-06-02 17:43:59.287132 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:43:59.287138 | orchestrator | Monday 02 June 2025 17:33:54 +0000 (0:00:00.806) 0:01:13.605 *********** 2025-06-02 17:43:59.287144 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.287150 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287156 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287162 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287168 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287174 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287180 | orchestrator | 2025-06-02 17:43:59.287186 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:43:59.287192 | orchestrator | Monday 02 June 2025 17:33:55 +0000 (0:00:00.613) 0:01:14.219 *********** 2025-06-02 17:43:59.287199 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.287205 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.287211 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.287217 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287223 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287229 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287235 | orchestrator | 2025-06-02 17:43:59.287241 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:43:59.287248 | orchestrator | Monday 02 June 2025 17:33:56 +0000 (0:00:00.882) 0:01:15.101 *********** 2025-06-02 17:43:59.287254 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 17:43:59.287260 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.287269 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.287276 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.287282 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.287288 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.287294 | orchestrator | 2025-06-02 17:43:59.287300 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 17:43:59.287306 | orchestrator | Monday 02 June 2025 17:33:57 +0000 (0:00:00.621) 0:01:15.723 *********** 2025-06-02 17:43:59.287312 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.287318 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.287325 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.287331 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.287337 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.287343 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.287349 | orchestrator | 2025-06-02 17:43:59.287355 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-02 17:43:59.287366 | orchestrator | Monday 02 June 2025 17:33:58 +0000 (0:00:01.216) 0:01:16.939 *********** 2025-06-02 17:43:59.287372 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.287378 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.287384 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.287390 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.287396 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.287402 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.287408 | orchestrator | 2025-06-02 17:43:59.287414 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-02 17:43:59.287420 | orchestrator | Monday 02 June 2025 17:33:59 +0000 (0:00:01.737) 0:01:18.676 
*********** 2025-06-02 17:43:59.287427 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.287433 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.287439 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.287450 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.287457 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.287463 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.287469 | orchestrator | 2025-06-02 17:43:59.287475 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-02 17:43:59.287481 | orchestrator | Monday 02 June 2025 17:34:02 +0000 (0:00:02.073) 0:01:20.750 *********** 2025-06-02 17:43:59.287488 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.287494 | orchestrator | 2025-06-02 17:43:59.287500 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-02 17:43:59.287507 | orchestrator | Monday 02 June 2025 17:34:03 +0000 (0:00:01.229) 0:01:21.980 *********** 2025-06-02 17:43:59.287513 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.287519 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287525 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287531 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287537 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287543 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287549 | orchestrator | 2025-06-02 17:43:59.287555 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-02 17:43:59.287561 | orchestrator | Monday 02 June 2025 17:34:04 +0000 (0:00:00.815) 0:01:22.796 *********** 2025-06-02 17:43:59.287567 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 17:43:59.287573 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287580 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287586 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287592 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287598 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287604 | orchestrator | 2025-06-02 17:43:59.287610 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-02 17:43:59.287616 | orchestrator | Monday 02 June 2025 17:34:04 +0000 (0:00:00.613) 0:01:23.409 *********** 2025-06-02 17:43:59.287623 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 17:43:59.287629 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 17:43:59.287635 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 17:43:59.287641 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 17:43:59.287647 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 17:43:59.287653 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 17:43:59.287659 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 17:43:59.287666 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 17:43:59.287680 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 17:43:59.287686 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 17:43:59.287692 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 
17:43:59.287698 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 17:43:59.287704 | orchestrator | 2025-06-02 17:43:59.287711 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-02 17:43:59.287717 | orchestrator | Monday 02 June 2025 17:34:06 +0000 (0:00:01.680) 0:01:25.090 *********** 2025-06-02 17:43:59.287723 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.287729 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.287735 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.287741 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.287747 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.287753 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.287759 | orchestrator | 2025-06-02 17:43:59.287765 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-02 17:43:59.287775 | orchestrator | Monday 02 June 2025 17:34:07 +0000 (0:00:00.842) 0:01:25.932 *********** 2025-06-02 17:43:59.287781 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.287787 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287793 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287799 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287805 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287811 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287818 | orchestrator | 2025-06-02 17:43:59.287824 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-02 17:43:59.287830 | orchestrator | Monday 02 June 2025 17:34:08 +0000 (0:00:00.865) 0:01:26.797 *********** 2025-06-02 17:43:59.287836 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.287842 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287848 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287854 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287860 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287866 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287872 | orchestrator | 2025-06-02 17:43:59.287878 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-02 17:43:59.287885 | orchestrator | Monday 02 June 2025 17:34:08 +0000 (0:00:00.660) 0:01:27.458 *********** 2025-06-02 17:43:59.287891 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.287897 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.287903 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.287909 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.287915 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.287921 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.287927 | orchestrator | 2025-06-02 17:43:59.287934 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-02 17:43:59.287944 | orchestrator | Monday 02 June 2025 17:34:09 +0000 (0:00:00.856) 0:01:28.315 *********** 2025-06-02 17:43:59.287951 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.287957 | orchestrator | 2025-06-02 17:43:59.287963 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-02 17:43:59.287969 | orchestrator | Monday 02 June 2025 17:34:10 +0000 (0:00:01.180) 0:01:29.495 *********** 2025-06-02 17:43:59.287976 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.287982 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.287988 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.287994 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 17:43:59.288004 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.288011 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.288035 | orchestrator | 2025-06-02 17:43:59.288043 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-02 17:43:59.288049 | orchestrator | Monday 02 June 2025 17:35:07 +0000 (0:00:56.596) 0:02:26.092 *********** 2025-06-02 17:43:59.288055 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 17:43:59.288061 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 17:43:59.288067 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 17:43:59.288074 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288080 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 17:43:59.288086 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 17:43:59.288092 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 17:43:59.288098 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288104 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 17:43:59.288111 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 17:43:59.288117 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 17:43:59.288123 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288129 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 17:43:59.288135 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 17:43:59.288141 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 17:43:59.288147 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288153 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 17:43:59.288159 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 17:43:59.288165 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 17:43:59.288172 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288178 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 17:43:59.288184 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 17:43:59.288190 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 17:43:59.288196 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288202 | orchestrator | 2025-06-02 17:43:59.288208 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-02 17:43:59.288214 | orchestrator | Monday 02 June 2025 17:35:08 +0000 (0:00:01.386) 0:02:27.479 *********** 2025-06-02 17:43:59.288220 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288227 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288233 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288239 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288245 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288251 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288257 | orchestrator | 2025-06-02 17:43:59.288266 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-02 17:43:59.288272 | orchestrator | Monday 02 June 2025 17:35:09 +0000 (0:00:00.758) 0:02:28.238 *********** 2025-06-02 17:43:59.288279 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288285 | orchestrator | 2025-06-02 17:43:59.288291 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-02 17:43:59.288297 | orchestrator | Monday 02 June 2025 17:35:09 +0000 (0:00:00.196) 0:02:28.435 *********** 2025-06-02 17:43:59.288303 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288314 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288320 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288326 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288333 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288339 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288345 | orchestrator | 2025-06-02 17:43:59.288351 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-02 17:43:59.288357 | orchestrator | Monday 02 June 2025 17:35:11 +0000 (0:00:01.376) 0:02:29.811 *********** 2025-06-02 17:43:59.288364 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288370 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288376 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288382 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288388 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288394 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288400 | orchestrator | 2025-06-02 17:43:59.288406 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-02 17:43:59.288412 | orchestrator | Monday 02 June 2025 17:35:12 +0000 (0:00:00.920) 0:02:30.732 *********** 2025-06-02 17:43:59.288418 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288424 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288435 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288441 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288447 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288453 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288459 | orchestrator | 2025-06-02 17:43:59.288465 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-02 17:43:59.288471 | orchestrator | Monday 02 June 2025 17:35:13 +0000 (0:00:01.154) 0:02:31.887 *********** 2025-06-02 17:43:59.288477 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.288483 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.288489 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.288496 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.288502 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.288508 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.288514 | orchestrator | 2025-06-02 17:43:59.288520 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-02 17:43:59.288526 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:02.181) 0:02:34.068 *********** 2025-06-02 17:43:59.288532 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.288538 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.288544 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.288550 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.288556 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.288562 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.288568 | orchestrator | 2025-06-02 17:43:59.288574 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-02 17:43:59.288581 | orchestrator | Monday 02 June 2025 17:35:16 +0000 (0:00:01.110) 0:02:35.179 *********** 2025-06-02 17:43:59.288587 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.288594 | orchestrator | 2025-06-02 17:43:59.288600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-02 17:43:59.288606 | orchestrator | Monday 02 June 2025 17:35:17 +0000 (0:00:01.269) 0:02:36.448 *********** 2025-06-02 17:43:59.288612 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288618 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288625 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288631 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288637 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288643 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288649 | orchestrator | 2025-06-02 17:43:59.288655 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-02 17:43:59.288665 | orchestrator | Monday 02 June 2025 17:35:18 +0000 (0:00:00.868) 0:02:37.317 *********** 2025-06-02 17:43:59.288671 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288677 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288683 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288689 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288695 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288702 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288708 | orchestrator | 2025-06-02 17:43:59.288714 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-02 17:43:59.288720 | orchestrator | Monday 02 June 2025 17:35:19 +0000 (0:00:00.943) 0:02:38.261 *********** 2025-06-02 17:43:59.288726 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288732 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288738 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288744 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288750 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288756 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288763 | orchestrator | 2025-06-02 17:43:59.288769 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-02 17:43:59.288775 | orchestrator | Monday 02 June 2025 17:35:20 +0000 (0:00:00.676) 0:02:38.938 *********** 2025-06-02 17:43:59.288781 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288787 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288793 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288799 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288805 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288811 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288817 | orchestrator | 2025-06-02 17:43:59.288823 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-02 17:43:59.288829 | orchestrator | Monday 02 June 2025 17:35:21 +0000 (0:00:00.820) 0:02:39.758 *********** 2025-06-02 17:43:59.288839 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288845 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288851 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288857 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288863 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288869 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288875 | orchestrator | 2025-06-02 17:43:59.288881 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-02 17:43:59.288887 | orchestrator | Monday 02 June 2025 17:35:21 +0000 (0:00:00.669) 0:02:40.428 *********** 2025-06-02 17:43:59.288893 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288899 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288905 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288911 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288917 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288923 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288929 | orchestrator | 2025-06-02 17:43:59.288936 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-02 17:43:59.288942 | orchestrator | Monday 02 June 2025 17:35:22 +0000 (0:00:01.117) 0:02:41.546 *********** 2025-06-02 17:43:59.288948 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.288954 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.288960 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.288966 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.288972 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.288978 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.288984 | orchestrator | 2025-06-02 17:43:59.288990 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-02 17:43:59.288996 | orchestrator | Monday 02 June 2025 17:35:23 +0000 (0:00:00.935) 0:02:42.481 *********** 2025-06-02 17:43:59.289006 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.289031 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.289037 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.289043 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.289050 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.289056 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.289062 | orchestrator | 2025-06-02 17:43:59.289068 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-02 17:43:59.289074 | orchestrator | Monday 02 June 2025 17:35:24 +0000 
(0:00:01.147) 0:02:43.628 *********** 2025-06-02 17:43:59.289081 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.289087 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.289093 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.289099 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.289105 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.289111 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.289117 | orchestrator | 2025-06-02 17:43:59.289123 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-02 17:43:59.289129 | orchestrator | Monday 02 June 2025 17:35:26 +0000 (0:00:01.581) 0:02:45.210 *********** 2025-06-02 17:43:59.289136 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.289142 | orchestrator | 2025-06-02 17:43:59.289148 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-02 17:43:59.289154 | orchestrator | Monday 02 June 2025 17:35:27 +0000 (0:00:01.309) 0:02:46.519 *********** 2025-06-02 17:43:59.289160 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-02 17:43:59.289166 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-02 17:43:59.289172 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-02 17:43:59.289179 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-02 17:43:59.289185 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-02 17:43:59.289191 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-02 17:43:59.289197 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-02 17:43:59.289203 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-02 17:43:59.289209 | orchestrator | 
changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-02 17:43:59.289215 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-02 17:43:59.289221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-02 17:43:59.289227 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-02 17:43:59.289233 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-02 17:43:59.289239 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-02 17:43:59.289246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-02 17:43:59.289252 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-02 17:43:59.289258 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-02 17:43:59.289264 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-02 17:43:59.289270 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-02 17:43:59.289276 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-02 17:43:59.289282 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-02 17:43:59.289288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-02 17:43:59.289294 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-02 17:43:59.289300 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-02 17:43:59.289306 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-02 17:43:59.289312 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-02 17:43:59.289318 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-02 17:43:59.289331 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-02 17:43:59.289337 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 
2025-06-02 17:43:59.289346 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-02 17:43:59.289353 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-02 17:43:59.289359 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-02 17:43:59.289365 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-02 17:43:59.289371 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-02 17:43:59.289377 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-02 17:43:59.289383 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-02 17:43:59.289389 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-02 17:43:59.289395 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-02 17:43:59.289401 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-02 17:43:59.289407 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-02 17:43:59.289413 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-02 17:43:59.289419 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-02 17:43:59.289425 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-02 17:43:59.289431 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-02 17:43:59.289437 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-02 17:43:59.289444 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-02 17:43:59.289454 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 17:43:59.289460 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 17:43:59.289466 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/radosgw) 2025-06-02 17:43:59.289472 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 17:43:59.289478 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-02 17:43:59.289485 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 17:43:59.289491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 17:43:59.289497 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 17:43:59.289503 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 17:43:59.289509 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 17:43:59.289515 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-02 17:43:59.289521 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 17:43:59.289527 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 17:43:59.289533 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 17:43:59.289539 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 17:43:59.289545 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 17:43:59.289551 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-02 17:43:59.289558 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 17:43:59.289564 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 17:43:59.289570 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 17:43:59.289576 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 17:43:59.289582 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 17:43:59.289592 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-02 17:43:59.289599 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 17:43:59.289605 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 17:43:59.289616 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 17:43:59.289626 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 17:43:59.289636 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 17:43:59.289652 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 17:43:59.289663 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-02 17:43:59.289673 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 17:43:59.289683 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 17:43:59.289692 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 17:43:59.289701 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 17:43:59.289712 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-02 17:43:59.289721 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-02 17:43:59.289731 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 17:43:59.289742 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-02 17:43:59.289751 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-02 17:43:59.289761 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 17:43:59.289775 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-02 17:43:59.289785 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-02 17:43:59.289796 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-02 17:43:59.289807 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-02 17:43:59.289817 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-02 17:43:59.289827 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-02 17:43:59.289837 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-02 17:43:59.289848 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-02 17:43:59.289857 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-02 17:43:59.289864 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-02 17:43:59.289870 | orchestrator | 2025-06-02 17:43:59.289876 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-02 17:43:59.289882 | orchestrator | Monday 02 June 2025 17:35:34 +0000 (0:00:06.894) 0:02:53.413 *********** 2025-06-02 17:43:59.289888 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.289894 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.289900 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.289907 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.289913 | orchestrator | 2025-06-02 17:43:59.289919 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-02 17:43:59.289932 | orchestrator | Monday 02 June 2025 17:35:36 +0000 (0:00:01.377) 0:02:54.791 *********** 2025-06-02 17:43:59.289938 | 
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.289945 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.289951 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.289964 | orchestrator | 2025-06-02 17:43:59.289970 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-06-02 17:43:59.289976 | orchestrator | Monday 02 June 2025 17:35:36 +0000 (0:00:00.800) 0:02:55.592 *********** 2025-06-02 17:43:59.289982 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.289989 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.289995 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.290001 | orchestrator | 2025-06-02 17:43:59.290007 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-02 17:43:59.290099 | orchestrator | Monday 02 June 2025 17:35:38 +0000 (0:00:01.797) 0:02:57.389 *********** 2025-06-02 17:43:59.290108 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290115 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290121 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290128 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.290134 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.290141 | orchestrator | ok: [testbed-node-5] 2025-06-02 
17:43:59.290151 | orchestrator | 2025-06-02 17:43:59.290160 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-02 17:43:59.290169 | orchestrator | Monday 02 June 2025 17:35:39 +0000 (0:00:00.834) 0:02:58.223 *********** 2025-06-02 17:43:59.290184 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290195 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290205 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290214 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.290223 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.290232 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.290241 | orchestrator | 2025-06-02 17:43:59.290252 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-02 17:43:59.290261 | orchestrator | Monday 02 June 2025 17:35:40 +0000 (0:00:01.096) 0:02:59.319 *********** 2025-06-02 17:43:59.290272 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290283 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290293 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290302 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.290311 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.290319 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.290331 | orchestrator | 2025-06-02 17:43:59.290341 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-02 17:43:59.290352 | orchestrator | Monday 02 June 2025 17:35:41 +0000 (0:00:00.654) 0:02:59.974 *********** 2025-06-02 17:43:59.290362 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290372 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290383 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290394 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.290404 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.290414 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.290424 | orchestrator | 2025-06-02 17:43:59.290430 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-02 17:43:59.290437 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.794) 0:03:00.768 *********** 2025-06-02 17:43:59.290443 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290449 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290455 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290462 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.290468 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.290479 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.290486 | orchestrator | 2025-06-02 17:43:59.290499 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-02 17:43:59.290505 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:00.543) 0:03:01.311 *********** 2025-06-02 17:43:59.290512 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290519 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290530 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290540 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.290550 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.290560 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.290570 | orchestrator | 2025-06-02 17:43:59.290579 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-02 17:43:59.290589 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:00.684) 0:03:01.996 *********** 2025-06-02 17:43:59.290598 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290607 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 17:43:59.290617 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290625 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.290634 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.290644 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.290650 | orchestrator | 2025-06-02 17:43:59.290656 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-02 17:43:59.290661 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:00.484) 0:03:02.480 *********** 2025-06-02 17:43:59.290666 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290672 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290677 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290695 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.290701 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.290706 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.290712 | orchestrator | 2025-06-02 17:43:59.290717 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-02 17:43:59.290722 | orchestrator | Monday 02 June 2025 17:35:44 +0000 (0:00:00.641) 0:03:03.122 *********** 2025-06-02 17:43:59.290728 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290733 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290739 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290744 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.290750 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.290755 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.290760 | orchestrator | 2025-06-02 17:43:59.290766 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-02 17:43:59.290771 | orchestrator | Monday 02 June 2025 17:35:48 +0000 
(0:00:04.219) 0:03:07.341 *********** 2025-06-02 17:43:59.290776 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290782 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290787 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290792 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.290798 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.290803 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.290808 | orchestrator | 2025-06-02 17:43:59.290814 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-02 17:43:59.290819 | orchestrator | Monday 02 June 2025 17:35:49 +0000 (0:00:01.099) 0:03:08.440 *********** 2025-06-02 17:43:59.290824 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290830 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290835 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290841 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.290846 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.290851 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.290857 | orchestrator | 2025-06-02 17:43:59.290862 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-02 17:43:59.290867 | orchestrator | Monday 02 June 2025 17:35:50 +0000 (0:00:00.762) 0:03:09.203 *********** 2025-06-02 17:43:59.290878 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290884 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290889 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290894 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.290900 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.290905 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.290910 | orchestrator | 2025-06-02 17:43:59.290915 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-06-02 17:43:59.290921 | orchestrator | Monday 02 June 2025 17:35:51 +0000 (0:00:00.951) 0:03:10.155 *********** 2025-06-02 17:43:59.290926 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290931 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290937 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290942 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.290948 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.290954 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.290959 | orchestrator | 2025-06-02 17:43:59.290964 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-02 17:43:59.290970 | orchestrator | Monday 02 June 2025 17:35:52 +0000 (0:00:00.613) 0:03:10.769 *********** 2025-06-02 17:43:59.290975 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.290980 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.290986 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.290992 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-02 17:43:59.291003 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-02 17:43:59.291010 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.291015 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-02 17:43:59.291035 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-02 17:43:59.291040 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.291050 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-02 17:43:59.291056 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-02 17:43:59.291066 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.291071 | orchestrator | 2025-06-02 17:43:59.291077 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-02 17:43:59.291082 | orchestrator | Monday 02 June 2025 17:35:53 +0000 (0:00:01.062) 0:03:11.832 *********** 2025-06-02 17:43:59.291087 | 
orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291093 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291098 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291103 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291109 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.291114 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.291119 | orchestrator |
2025-06-02 17:43:59.291125 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-02 17:43:59.291130 | orchestrator | Monday 02 June 2025 17:35:53 +0000 (0:00:00.697) 0:03:12.529 ***********
2025-06-02 17:43:59.291135 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291141 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291146 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291151 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291157 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.291162 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.291167 | orchestrator |
2025-06-02 17:43:59.291173 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 17:43:59.291178 | orchestrator | Monday 02 June 2025 17:35:54 +0000 (0:00:01.112) 0:03:13.641 ***********
2025-06-02 17:43:59.291184 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291189 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291194 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291200 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291205 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.291210 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.291216 | orchestrator |
2025-06-02 17:43:59.291221 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 17:43:59.291227 | orchestrator | Monday 02 June 2025 17:35:55 +0000 (0:00:00.744) 0:03:14.386 ***********
2025-06-02 17:43:59.291232 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291237 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291243 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291248 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291254 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.291259 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.291266 | orchestrator |
2025-06-02 17:43:59.291272 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 17:43:59.291278 | orchestrator | Monday 02 June 2025 17:35:56 +0000 (0:00:01.005) 0:03:15.392 ***********
2025-06-02 17:43:59.291284 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291290 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291296 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291302 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291308 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.291314 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.291319 | orchestrator |
2025-06-02 17:43:59.291326 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 17:43:59.291332 | orchestrator | Monday 02 June 2025 17:35:57 +0000 (0:00:00.830) 0:03:16.222 ***********
2025-06-02 17:43:59.291338 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291344 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291349 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291355 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.291362 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.291368 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.291374 | orchestrator |
2025-06-02 17:43:59.291384 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 17:43:59.291394 | orchestrator | Monday 02 June 2025 17:35:58 +0000 (0:00:00.874) 0:03:17.097 ***********
2025-06-02 17:43:59.291400 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 17:43:59.291407 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 17:43:59.291413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 17:43:59.291419 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291425 | orchestrator |
2025-06-02 17:43:59.291431 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 17:43:59.291437 | orchestrator | Monday 02 June 2025 17:35:58 +0000 (0:00:00.297) 0:03:17.395 ***********
2025-06-02 17:43:59.291442 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 17:43:59.291448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 17:43:59.291453 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 17:43:59.291458 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291464 | orchestrator |
2025-06-02 17:43:59.291469 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 17:43:59.291474 | orchestrator | Monday 02 June 2025 17:35:59 +0000 (0:00:00.370) 0:03:17.766 ***********
2025-06-02 17:43:59.291480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 17:43:59.291485 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 17:43:59.291490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 17:43:59.291496 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291501 | orchestrator |
2025-06-02 17:43:59.291510 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 17:43:59.291516 | orchestrator | Monday 02 June 2025 17:35:59 +0000 (0:00:00.353) 0:03:18.120 ***********
2025-06-02 17:43:59.291521 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291526 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291532 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291537 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.291542 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.291548 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.291553 | orchestrator |
2025-06-02 17:43:59.291559 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 17:43:59.291564 | orchestrator | Monday 02 June 2025 17:36:00 +0000 (0:00:00.589) 0:03:18.710 ***********
2025-06-02 17:43:59.291570 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-02 17:43:59.291575 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291580 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-02 17:43:59.291586 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291591 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-02 17:43:59.291596 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291602 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 17:43:59.291607 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 17:43:59.291612 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 17:43:59.291618 | orchestrator |
2025-06-02 17:43:59.291623 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-02 17:43:59.291628 | orchestrator | Monday 02 June 2025 17:36:02 +0000 (0:00:02.200) 0:03:20.910 ***********
2025-06-02 17:43:59.291634 | orchestrator | changed:
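The `Set_fact rgw_instances` task above assembles, per RGW host, one dict per instance number (here just `item=0`) combining the `_radosgw_address` fact with a frontend port. A rough sketch of the shape under the assumption of one instance per node; field names follow the ceph-ansible facts visible in this log, but the builder function itself is hypothetical:

```python
# Sketch of the rgw_instances structure assembled by the Set_fact task above.
# Field names follow the ceph-ansible facts seen in this log; the builder
# function is an illustrative assumption, not ceph-ansible's code.
def build_rgw_instances(radosgw_address: str, num_instances: int = 1,
                        base_port: int = 8081) -> list:
    return [
        {
            "instance_name": f"rgw{i}",
            "radosgw_address": radosgw_address,
            "radosgw_frontend_port": base_port + i,
        }
        for i in range(num_instances)
    ]

# e.g. for testbed-node-3, whose address appears in the log as 192.168.16.13
instances = build_rgw_instances("192.168.16.13")
print(instances)
```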
[testbed-node-0]
2025-06-02 17:43:59.291639 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:59.291644 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:59.291650 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:43:59.291655 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:43:59.291660 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:43:59.291665 | orchestrator |
2025-06-02 17:43:59.291671 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 17:43:59.291676 | orchestrator | Monday 02 June 2025 17:36:04 +0000 (0:00:02.440) 0:03:23.351 ***********
2025-06-02 17:43:59.291686 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:59.291691 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:59.291697 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:59.291702 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:43:59.291707 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:43:59.291713 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:43:59.291718 | orchestrator |
2025-06-02 17:43:59.291724 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-02 17:43:59.291729 | orchestrator | Monday 02 June 2025 17:36:05 +0000 (0:00:01.239) 0:03:24.590 ***********
2025-06-02 17:43:59.291734 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291740 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.291745 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.291751 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:59.291756 | orchestrator |
2025-06-02 17:43:59.291761 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-02 17:43:59.291767 | orchestrator | Monday 02 June 2025 17:36:07 +0000 (0:00:01.120) 0:03:25.710 ***********
2025-06-02 17:43:59.291772 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.291778 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.291783 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.291789 | orchestrator |
2025-06-02 17:43:59.291794 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-02 17:43:59.291800 | orchestrator | Monday 02 June 2025 17:36:07 +0000 (0:00:00.337) 0:03:26.047 ***********
2025-06-02 17:43:59.291805 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:59.291810 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:59.291816 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:59.291821 | orchestrator |
2025-06-02 17:43:59.291826 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-02 17:43:59.291832 | orchestrator | Monday 02 June 2025 17:36:08 +0000 (0:00:01.586) 0:03:27.633 ***********
2025-06-02 17:43:59.291837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.291843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:43:59.291851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:43:59.291856 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291862 | orchestrator |
2025-06-02 17:43:59.291867 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-02 17:43:59.291873 | orchestrator | Monday 02 June 2025 17:36:09 +0000 (0:00:00.325) 0:03:28.292 ***********
2025-06-02 17:43:59.291878 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.291883 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.291889 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.291894 | orchestrator |
2025-06-02 17:43:59.291899 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-02 17:43:59.291905 | orchestrator | Monday 02 June 2025 17:36:09 +0000 (0:00:00.325) 0:03:28.617 ***********
2025-06-02 17:43:59.291910 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.291915 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.291921 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.291926 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:43:59.291932 | orchestrator |
2025-06-02 17:43:59.291937 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-02 17:43:59.291942 | orchestrator | Monday 02 June 2025 17:36:11 +0000 (0:00:01.144) 0:03:29.762 ***********
2025-06-02 17:43:59.291948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:43:59.291953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:43:59.291958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:43:59.291968 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291973 | orchestrator |
2025-06-02 17:43:59.291982 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-02 17:43:59.291988 | orchestrator | Monday 02 June 2025 17:36:11 +0000 (0:00:00.394) 0:03:30.157 ***********
2025-06-02 17:43:59.291993 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.291998 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.292004 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.292009 | orchestrator |
2025-06-02 17:43:59.292015 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-02 17:43:59.292033 | orchestrator | Monday 02 June 2025 17:36:11 +0000 (0:00:00.405) 0:03:30.562 ***********
2025-06-02 17:43:59.292039 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292044 | orchestrator |
2025-06-02 17:43:59.292049 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-02 17:43:59.292055 | orchestrator | Monday 02 June 2025 17:36:12 +0000 (0:00:00.242) 0:03:30.805 ***********
2025-06-02 17:43:59.292060 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292065 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.292071 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.292076 | orchestrator |
2025-06-02 17:43:59.292081 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-02 17:43:59.292087 | orchestrator | Monday 02 June 2025 17:36:12 +0000 (0:00:00.324) 0:03:31.129 ***********
2025-06-02 17:43:59.292092 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292098 | orchestrator |
2025-06-02 17:43:59.292103 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-02 17:43:59.292108 | orchestrator | Monday 02 June 2025 17:36:12 +0000 (0:00:00.207) 0:03:31.337 ***********
2025-06-02 17:43:59.292114 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292120 | orchestrator |
2025-06-02 17:43:59.292125 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-02 17:43:59.292130 | orchestrator | Monday 02 June 2025 17:36:12 +0000 (0:00:00.205) 0:03:31.542 ***********
2025-06-02 17:43:59.292136 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292141 | orchestrator |
2025-06-02 17:43:59.292146 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-02 17:43:59.292152 | orchestrator | Monday 02 June 2025 17:36:13 +0000 (0:00:00.401) 0:03:31.944 ***********
2025-06-02 17:43:59.292157 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292163 | orchestrator |
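The OSD handler steps logged here (get pool list, get balancer status, disable the balancer and pg autoscaling, restart OSDs host by host, then re-enable both) exist so that automatic data movement does not race a rolling restart; in this run they are all skipped because no restart was triggered. A pure-logic sketch of that ordering (the function and step names are illustrative, not ceph-ansible's actual tasks):

```python
# Illustrative ordering of the OSD-restart handler steps logged around here:
# quiesce data movement, restart serially, then restore the previous state.
def osd_restart_plan(osd_hosts: list, autoscale_was_on: bool,
                     balancer_was_on: bool) -> list:
    plan = ["unset noup flag", "disable balancer", "disable pg autoscale"]
    # OSDs are restarted one host at a time to keep placement groups available
    plan += [f"restart osds on {host}" for host in osd_hosts]
    # only restore features that were enabled before the restart began
    if autoscale_was_on:
        plan.append("re-enable pg autoscale")
    if balancer_was_on:
        plan.append("re-enable balancer")
    return plan

plan = osd_restart_plan(["testbed-node-3", "testbed-node-4", "testbed-node-5"],
                        autoscale_was_on=True, balancer_was_on=True)
print(plan)
```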
2025-06-02 17:43:59.292168 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-02 17:43:59.292173 | orchestrator | Monday 02 June 2025 17:36:13 +0000 (0:00:00.249) 0:03:32.193 ***********
2025-06-02 17:43:59.292179 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292184 | orchestrator |
2025-06-02 17:43:59.292190 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-02 17:43:59.292195 | orchestrator | Monday 02 June 2025 17:36:13 +0000 (0:00:00.244) 0:03:32.437 ***********
2025-06-02 17:43:59.292201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:43:59.292206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:43:59.292212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:43:59.292217 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292223 | orchestrator |
2025-06-02 17:43:59.292228 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-02 17:43:59.292233 | orchestrator | Monday 02 June 2025 17:36:14 +0000 (0:00:00.404) 0:03:32.842 ***********
2025-06-02 17:43:59.292239 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292244 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.292249 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.292255 | orchestrator |
2025-06-02 17:43:59.292260 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-02 17:43:59.292265 | orchestrator | Monday 02 June 2025 17:36:14 +0000 (0:00:00.384) 0:03:33.226 ***********
2025-06-02 17:43:59.292280 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292286 | orchestrator |
2025-06-02 17:43:59.292291 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-02 17:43:59.292296 | orchestrator | Monday 02 June 2025 17:36:14 +0000 (0:00:00.244) 0:03:33.471 ***********
2025-06-02 17:43:59.292302 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292307 | orchestrator |
2025-06-02 17:43:59.292312 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-02 17:43:59.292321 | orchestrator | Monday 02 June 2025 17:36:15 +0000 (0:00:00.236) 0:03:33.707 ***********
2025-06-02 17:43:59.292326 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.292332 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.292337 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.292342 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:43:59.292348 | orchestrator |
2025-06-02 17:43:59.292353 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-02 17:43:59.292358 | orchestrator | Monday 02 June 2025 17:36:16 +0000 (0:00:01.127) 0:03:34.835 ***********
2025-06-02 17:43:59.292364 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.292369 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.292375 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.292380 | orchestrator |
2025-06-02 17:43:59.292385 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-02 17:43:59.292391 | orchestrator | Monday 02 June 2025 17:36:16 +0000 (0:00:00.360) 0:03:35.195 ***********
2025-06-02 17:43:59.292396 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:43:59.292402 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:43:59.292407 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:43:59.292412 | orchestrator |
2025-06-02 17:43:59.292418 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-02 17:43:59.292423 | orchestrator | Monday 02 June 2025 17:36:17 +0000 (0:00:01.284) 0:03:36.480 ***********
2025-06-02 17:43:59.292428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:43:59.292434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:43:59.292439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:43:59.292448 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292454 | orchestrator |
2025-06-02 17:43:59.292460 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-02 17:43:59.292465 | orchestrator | Monday 02 June 2025 17:36:19 +0000 (0:00:01.301) 0:03:37.781 ***********
2025-06-02 17:43:59.292470 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.292476 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.292481 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.292486 | orchestrator |
2025-06-02 17:43:59.292492 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-02 17:43:59.292497 | orchestrator | Monday 02 June 2025 17:36:19 +0000 (0:00:00.394) 0:03:38.176 ***********
2025-06-02 17:43:59.292503 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.292508 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.292513 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.292519 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:43:59.292524 | orchestrator |
2025-06-02 17:43:59.292529 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-02 17:43:59.292535 | orchestrator | Monday 02 June 2025 17:36:20 +0000 (0:00:01.085) 0:03:39.261 ***********
2025-06-02 17:43:59.292540 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.292546 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.292551 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.292556 | orchestrator |
2025-06-02 17:43:59.292562 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-02 17:43:59.292572 | orchestrator | Monday 02 June 2025 17:36:20 +0000 (0:00:00.402) 0:03:39.664 ***********
2025-06-02 17:43:59.292577 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:43:59.292583 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:43:59.292588 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:43:59.292593 | orchestrator |
2025-06-02 17:43:59.292599 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-02 17:43:59.292604 | orchestrator | Monday 02 June 2025 17:36:22 +0000 (0:00:01.329) 0:03:40.994 ***********
2025-06-02 17:43:59.292609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:43:59.292615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:43:59.292620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:43:59.292625 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292631 | orchestrator |
2025-06-02 17:43:59.292636 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-02 17:43:59.292642 | orchestrator | Monday 02 June 2025 17:36:23 +0000 (0:00:00.872) 0:03:41.866 ***********
2025-06-02 17:43:59.292647 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:43:59.292653 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:43:59.292658 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:43:59.292664 | orchestrator |
2025-06-02 17:43:59.292669 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-06-02 17:43:59.292674 | orchestrator | Monday 02 June 2025 17:36:23 +0000 (0:00:00.352) 0:03:42.218 ***********
2025-06-02 17:43:59.292680 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.292685 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.292690 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.292696 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292701 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.292706 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.292712 | orchestrator |
2025-06-02 17:43:59.292717 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-02 17:43:59.292723 | orchestrator | Monday 02 June 2025 17:36:24 +0000 (0:00:00.853) 0:03:43.072 ***********
2025-06-02 17:43:59.292728 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:43:59.292733 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:43:59.292739 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:43:59.292744 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:59.292749 | orchestrator |
2025-06-02 17:43:59.292755 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-02 17:43:59.292760 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:01.117) 0:03:44.189 ***********
2025-06-02 17:43:59.292765 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.292771 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.292776 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.292781 | orchestrator |
2025-06-02 17:43:59.292790 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-02 17:43:59.292796 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:00.369) 0:03:44.558 ***********
2025-06-02 17:43:59.292801 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:59.292806 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:59.292812 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:59.292817 | orchestrator |
2025-06-02 17:43:59.292822 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-02 17:43:59.292828 | orchestrator | Monday 02 June 2025 17:36:27 +0000 (0:00:01.248) 0:03:45.806 ***********
2025-06-02 17:43:59.292833 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:59.292839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:43:59.292844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:43:59.292849 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.292859 | orchestrator |
2025-06-02 17:43:59.292865 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-02 17:43:59.292870 | orchestrator | Monday 02 June 2025 17:36:27 +0000 (0:00:00.832) 0:03:46.639 ***********
2025-06-02 17:43:59.292876 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.292881 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.292886 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.292892 | orchestrator |
2025-06-02 17:43:59.292897 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-06-02 17:43:59.292902 | orchestrator |
2025-06-02 17:43:59.292908 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 17:43:59.292913 | orchestrator | Monday 02 June 2025 17:36:28 +0000 (0:00:00.862) 0:03:47.502 ***********
2025-06-02 17:43:59.292923 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:59.292929 | orchestrator |
2025-06-02 17:43:59.292934 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 17:43:59.292939 | orchestrator | Monday 02 June 2025 17:36:29 +0000 (0:00:00.443) 0:03:47.946 ***********
2025-06-02 17:43:59.292945 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:59.292950 | orchestrator |
2025-06-02 17:43:59.292956 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 17:43:59.292961 | orchestrator | Monday 02 June 2025 17:36:29 +0000 (0:00:00.611) 0:03:48.557 ***********
2025-06-02 17:43:59.292967 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.292972 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.292978 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.292983 | orchestrator |
2025-06-02 17:43:59.292988 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 17:43:59.292994 | orchestrator | Monday 02 June 2025 17:36:30 +0000 (0:00:00.697) 0:03:49.255 ***********
2025-06-02 17:43:59.292999 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293004 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293010 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293015 | orchestrator |
2025-06-02 17:43:59.293031 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 17:43:59.293037 | orchestrator | Monday 02 June 2025 17:36:30 +0000 (0:00:00.263) 0:03:49.519 ***********
2025-06-02 17:43:59.293042 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293047 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293053 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293058 | orchestrator |
2025-06-02 17:43:59.293064 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 17:43:59.293069 | orchestrator | Monday 02 June 2025 17:36:31 +0000 (0:00:00.260) 0:03:49.779 ***********
2025-06-02 17:43:59.293075 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293080 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293085 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293091 | orchestrator |
2025-06-02 17:43:59.293096 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 17:43:59.293102 | orchestrator | Monday 02 June 2025 17:36:31 +0000 (0:00:00.498) 0:03:50.277 ***********
2025-06-02 17:43:59.293107 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293112 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293118 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293124 | orchestrator |
2025-06-02 17:43:59.293129 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 17:43:59.293134 | orchestrator | Monday 02 June 2025 17:36:32 +0000 (0:00:00.695) 0:03:50.973 ***********
2025-06-02 17:43:59.293140 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293145 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293151 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293161 | orchestrator |
2025-06-02 17:43:59.293166 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 17:43:59.293171 | orchestrator | Monday 02 June 2025 17:36:32 +0000 (0:00:00.273) 0:03:51.247 ***********
2025-06-02 17:43:59.293177 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293182 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293188 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293193 | orchestrator |
2025-06-02 17:43:59.293198 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 17:43:59.293204 | orchestrator | Monday 02 June 2025 17:36:32 +0000 (0:00:00.263) 0:03:51.511 ***********
2025-06-02 17:43:59.293209 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293214 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293220 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293225 | orchestrator |
2025-06-02 17:43:59.293231 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 17:43:59.293236 | orchestrator | Monday 02 June 2025 17:36:33 +0000 (0:00:00.899) 0:03:52.410 ***********
2025-06-02 17:43:59.293241 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293247 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293252 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293257 | orchestrator |
2025-06-02 17:43:59.293263 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 17:43:59.293271 | orchestrator | Monday 02 June 2025 17:36:34 +0000 (0:00:00.636) 0:03:53.046 ***********
2025-06-02 17:43:59.293277 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293282 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293288 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293293 | orchestrator |
2025-06-02 17:43:59.293298 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 17:43:59.293304 | orchestrator | Monday 02 June 2025 17:36:34 +0000 (0:00:00.292) 0:03:53.339 ***********
2025-06-02 17:43:59.293309 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293315 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293320 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293325 | orchestrator |
2025-06-02 17:43:59.293331 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 17:43:59.293336 | orchestrator | Monday 02 June 2025 17:36:34 +0000 (0:00:00.290) 0:03:53.629 ***********
2025-06-02 17:43:59.293341 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293347 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293352 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293357 | orchestrator |
2025-06-02 17:43:59.293363 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 17:43:59.293368 | orchestrator | Monday 02 June 2025 17:36:35 +0000 (0:00:00.607) 0:03:54.236 ***********
2025-06-02 17:43:59.293374 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293379 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293384 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293390 | orchestrator |
2025-06-02 17:43:59.293395 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 17:43:59.293405 | orchestrator | Monday 02 June 2025 17:36:35 +0000 (0:00:00.320) 0:03:54.557 ***********
2025-06-02 17:43:59.293410 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293416 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293421 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293426 | orchestrator |
2025-06-02 17:43:59.293432 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 17:43:59.293437 | orchestrator | Monday 02 June 2025 17:36:36 +0000 (0:00:00.326) 0:03:54.883 ***********
2025-06-02 17:43:59.293443 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293448 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293454 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293459 | orchestrator |
2025-06-02 17:43:59.293464 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 17:43:59.293474 | orchestrator | Monday 02 June 2025 17:36:36 +0000 (0:00:00.301) 0:03:55.185 ***********
2025-06-02 17:43:59.293479 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293484 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:59.293490 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:59.293495 | orchestrator |
2025-06-02 17:43:59.293500 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 17:43:59.293506 | orchestrator | Monday 02 June 2025 17:36:37 +0000 (0:00:00.588) 0:03:55.774 ***********
2025-06-02 17:43:59.293511 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293517 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293522 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293527 | orchestrator |
2025-06-02 17:43:59.293533 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 17:43:59.293538 | orchestrator | Monday 02 June 2025 17:36:37 +0000 (0:00:00.355) 0:03:56.130 ***********
2025-06-02 17:43:59.293543 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293549 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293554 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293560 | orchestrator |
2025-06-02 17:43:59.293565 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 17:43:59.293571 | orchestrator | Monday 02 June 2025 17:36:37 +0000 (0:00:00.346) 0:03:56.477 ***********
2025-06-02 17:43:59.293576 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293582 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293587 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293593 | orchestrator |
2025-06-02 17:43:59.293598 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-06-02 17:43:59.293604 | orchestrator | Monday 02 June 2025 17:36:38 +0000 (0:00:00.856) 0:03:57.333 ***********
2025-06-02 17:43:59.293609 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293614 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293620 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293625 | orchestrator |
2025-06-02 17:43:59.293630 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-06-02 17:43:59.293636 | orchestrator | Monday 02 June 2025 17:36:38 +0000 (0:00:00.322) 0:03:57.656 ***********
2025-06-02 17:43:59.293641 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:59.293647 | orchestrator |
2025-06-02 17:43:59.293653 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-06-02 17:43:59.293658 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:00.613) 0:03:58.269 ***********
2025-06-02 17:43:59.293663 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:59.293669 | orchestrator |
2025-06-02 17:43:59.293674 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-06-02 17:43:59.293680 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:00.155) 0:03:58.424 ***********
2025-06-02 17:43:59.293685 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-02 17:43:59.293690 | orchestrator |
2025-06-02 17:43:59.293696 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-06-02 17:43:59.293701 | orchestrator | Monday 02 June 2025 17:36:41 +0000 (0:00:01.872) 0:04:00.296 ***********
2025-06-02 17:43:59.293707 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:59.293712 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:59.293718 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:59.293723 | orchestrator |
2025-06-02 17:43:59.293729 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-06-02 17:43:59.293734 | orchestrator | Monday 02 June 2025
17:36:42 +0000 (0:00:00.404) 0:04:00.701 *********** 2025-06-02 17:43:59.293740 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.293745 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.293751 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.293756 | orchestrator | 2025-06-02 17:43:59.293764 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-02 17:43:59.293774 | orchestrator | Monday 02 June 2025 17:36:42 +0000 (0:00:00.348) 0:04:01.049 *********** 2025-06-02 17:43:59.293779 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.293785 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.293790 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.293796 | orchestrator | 2025-06-02 17:43:59.293801 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-02 17:43:59.293806 | orchestrator | Monday 02 June 2025 17:36:43 +0000 (0:00:01.266) 0:04:02.316 *********** 2025-06-02 17:43:59.293812 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.293817 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.293823 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.293828 | orchestrator | 2025-06-02 17:43:59.293834 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-02 17:43:59.293839 | orchestrator | Monday 02 June 2025 17:36:44 +0000 (0:00:01.156) 0:04:03.472 *********** 2025-06-02 17:43:59.293845 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.293850 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.293855 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.293861 | orchestrator | 2025-06-02 17:43:59.293866 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-02 17:43:59.293872 | orchestrator | Monday 02 June 2025 17:36:45 +0000 (0:00:00.732) 
0:04:04.204 *********** 2025-06-02 17:43:59.293877 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.293883 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.293888 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.293894 | orchestrator | 2025-06-02 17:43:59.293899 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-02 17:43:59.293908 | orchestrator | Monday 02 June 2025 17:36:46 +0000 (0:00:00.674) 0:04:04.879 *********** 2025-06-02 17:43:59.293914 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.293919 | orchestrator | 2025-06-02 17:43:59.293925 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-02 17:43:59.293930 | orchestrator | Monday 02 June 2025 17:36:47 +0000 (0:00:01.288) 0:04:06.168 *********** 2025-06-02 17:43:59.293936 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.293941 | orchestrator | 2025-06-02 17:43:59.293947 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-02 17:43:59.293952 | orchestrator | Monday 02 June 2025 17:36:48 +0000 (0:00:00.877) 0:04:07.045 *********** 2025-06-02 17:43:59.293958 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 17:43:59.293963 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.293969 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.293974 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:43:59.293980 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-02 17:43:59.293986 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:43:59.293991 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:43:59.293997 | orchestrator | 
changed: [testbed-node-0 -> {{ item }}] 2025-06-02 17:43:59.294002 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:43:59.294008 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-06-02 17:43:59.294123 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-02 17:43:59.294132 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-02 17:43:59.294138 | orchestrator | 2025-06-02 17:43:59.294143 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-02 17:43:59.294149 | orchestrator | Monday 02 June 2025 17:36:51 +0000 (0:00:03.525) 0:04:10.571 *********** 2025-06-02 17:43:59.294154 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.294160 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.294165 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.294175 | orchestrator | 2025-06-02 17:43:59.294181 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-02 17:43:59.294186 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:01.493) 0:04:12.065 *********** 2025-06-02 17:43:59.294192 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.294197 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.294203 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.294208 | orchestrator | 2025-06-02 17:43:59.294214 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-02 17:43:59.294219 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:00.334) 0:04:12.399 *********** 2025-06-02 17:43:59.294225 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.294230 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.294235 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.294241 | orchestrator | 2025-06-02 17:43:59.294246 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2025-06-02 17:43:59.294252 | orchestrator | Monday 02 June 2025 17:36:54 +0000 (0:00:00.372) 0:04:12.772 *********** 2025-06-02 17:43:59.294257 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.294263 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.294268 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.294274 | orchestrator | 2025-06-02 17:43:59.294279 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-02 17:43:59.294285 | orchestrator | Monday 02 June 2025 17:36:55 +0000 (0:00:01.905) 0:04:14.678 *********** 2025-06-02 17:43:59.294290 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.294295 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.294301 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.294306 | orchestrator | 2025-06-02 17:43:59.294311 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-02 17:43:59.294317 | orchestrator | Monday 02 June 2025 17:36:58 +0000 (0:00:02.058) 0:04:16.736 *********** 2025-06-02 17:43:59.294322 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.294328 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.294333 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.294338 | orchestrator | 2025-06-02 17:43:59.294348 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-02 17:43:59.294354 | orchestrator | Monday 02 June 2025 17:36:58 +0000 (0:00:00.358) 0:04:17.095 *********** 2025-06-02 17:43:59.294359 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:59.294365 | orchestrator | 2025-06-02 17:43:59.294370 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-02 17:43:59.294376 | 
orchestrator | Monday 02 June 2025 17:36:58 +0000 (0:00:00.560) 0:04:17.655 *********** 2025-06-02 17:43:59.294381 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.294387 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.294392 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.294398 | orchestrator | 2025-06-02 17:43:59.294403 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-02 17:43:59.294409 | orchestrator | Monday 02 June 2025 17:36:59 +0000 (0:00:00.562) 0:04:18.217 *********** 2025-06-02 17:43:59.294414 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.294420 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.294425 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.294431 | orchestrator | 2025-06-02 17:43:59.294436 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-02 17:43:59.294441 | orchestrator | Monday 02 June 2025 17:36:59 +0000 (0:00:00.304) 0:04:18.522 *********** 2025-06-02 17:43:59.294447 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:59.294453 | orchestrator | 2025-06-02 17:43:59.294458 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-02 17:43:59.294489 | orchestrator | Monday 02 June 2025 17:37:00 +0000 (0:00:00.562) 0:04:19.085 *********** 2025-06-02 17:43:59.294496 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.294502 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.294507 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.294513 | orchestrator | 2025-06-02 17:43:59.294518 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-02 17:43:59.294524 | orchestrator | Monday 02 June 2025 17:37:02 +0000 (0:00:01.920) 
0:04:21.005 *********** 2025-06-02 17:43:59.294529 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.294535 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.294540 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.294545 | orchestrator | 2025-06-02 17:43:59.294551 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-02 17:43:59.294556 | orchestrator | Monday 02 June 2025 17:37:03 +0000 (0:00:01.212) 0:04:22.218 *********** 2025-06-02 17:43:59.294562 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.294567 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.294572 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.294578 | orchestrator | 2025-06-02 17:43:59.294583 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-02 17:43:59.294588 | orchestrator | Monday 02 June 2025 17:37:05 +0000 (0:00:01.906) 0:04:24.124 *********** 2025-06-02 17:43:59.294593 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.294598 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.294603 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.294608 | orchestrator | 2025-06-02 17:43:59.294613 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-02 17:43:59.294617 | orchestrator | Monday 02 June 2025 17:37:07 +0000 (0:00:02.370) 0:04:26.494 *********** 2025-06-02 17:43:59.294622 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:59.294627 | orchestrator | 2025-06-02 17:43:59.294632 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-06-02 17:43:59.294637 | orchestrator | Monday 02 June 2025 17:37:08 +0000 (0:00:00.848) 0:04:27.343 *********** 2025-06-02 17:43:59.294641 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-02 17:43:59.294646 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.294651 | orchestrator | 2025-06-02 17:43:59.294656 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-02 17:43:59.294661 | orchestrator | Monday 02 June 2025 17:37:30 +0000 (0:00:21.982) 0:04:49.325 *********** 2025-06-02 17:43:59.294666 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.294670 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.294675 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.294680 | orchestrator | 2025-06-02 17:43:59.294685 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-02 17:43:59.294690 | orchestrator | Monday 02 June 2025 17:37:41 +0000 (0:00:10.523) 0:04:59.849 *********** 2025-06-02 17:43:59.294694 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.294699 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.294704 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.294709 | orchestrator | 2025-06-02 17:43:59.294714 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-02 17:43:59.294718 | orchestrator | Monday 02 June 2025 17:37:41 +0000 (0:00:00.319) 0:05:00.168 *********** 2025-06-02 17:43:59.294725 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce4c90ec6c7e6e48148577f756af2db83ca0326f'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2025-06-02 17:43:59.294735 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce4c90ec6c7e6e48148577f756af2db83ca0326f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-02 17:43:59.294745 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce4c90ec6c7e6e48148577f756af2db83ca0326f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-02 17:43:59.294751 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce4c90ec6c7e6e48148577f756af2db83ca0326f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-02 17:43:59.294771 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ce4c90ec6c7e6e48148577f756af2db83ca0326f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-02 17:43:59.294779 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__ce4c90ec6c7e6e48148577f756af2db83ca0326f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ce4c90ec6c7e6e48148577f756af2db83ca0326f'}])  2025-06-02 17:43:59.294786 | orchestrator | 2025-06-02 17:43:59.294792 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:43:59.294797 | orchestrator | Monday 02 June 2025 17:37:56 +0000 (0:00:15.259) 0:05:15.428 *********** 2025-06-02 17:43:59.294803 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.294808 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.294813 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.294819 | orchestrator | 2025-06-02 17:43:59.294824 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 17:43:59.294830 | orchestrator | Monday 02 June 2025 17:37:57 +0000 (0:00:00.450) 0:05:15.879 *********** 2025-06-02 17:43:59.294835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:59.294841 | orchestrator | 2025-06-02 17:43:59.294846 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 17:43:59.294852 | orchestrator | Monday 02 June 2025 17:37:57 +0000 (0:00:00.800) 0:05:16.680 *********** 2025-06-02 17:43:59.294857 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.294863 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.294868 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.294874 | orchestrator | 2025-06-02 17:43:59.294879 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 17:43:59.294885 | orchestrator | Monday 02 June 2025 17:37:58 +0000 (0:00:00.335) 0:05:17.015 *********** 2025-06-02 17:43:59.294890 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.294895 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.294901 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.294906 | orchestrator | 2025-06-02 17:43:59.294912 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 17:43:59.294917 | orchestrator | Monday 02 June 2025 17:37:58 +0000 (0:00:00.334) 0:05:17.350 *********** 2025-06-02 17:43:59.294923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 17:43:59.294934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 17:43:59.294940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 17:43:59.294945 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.294951 | orchestrator | 2025-06-02 17:43:59.294956 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 17:43:59.294962 | orchestrator | Monday 02 June 2025 17:37:59 +0000 (0:00:00.800) 0:05:18.150 *********** 2025-06-02 17:43:59.294967 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.294973 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.294978 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.294984 | orchestrator | 2025-06-02 17:43:59.294989 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-02 17:43:59.294995 | orchestrator | 2025-06-02 17:43:59.295000 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:43:59.295006 | orchestrator | Monday 02 June 2025 17:38:00 +0000 (0:00:00.823) 0:05:18.973 *********** 2025-06-02 17:43:59.295012 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:59.295029 | orchestrator | 2025-06-02 17:43:59.295034 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 17:43:59.295040 | orchestrator | Monday 02 June 2025 17:38:00 +0000 (0:00:00.440) 0:05:19.414 *********** 2025-06-02 17:43:59.295049 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:59.295055 | orchestrator | 2025-06-02 17:43:59.295060 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:43:59.295066 | orchestrator | Monday 02 June 2025 17:38:01 +0000 (0:00:00.637) 0:05:20.051 *********** 2025-06-02 17:43:59.295071 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.295076 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.295082 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.295087 | orchestrator | 2025-06-02 17:43:59.295093 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:43:59.295099 | orchestrator | Monday 02 June 2025 17:38:02 +0000 (0:00:00.709) 0:05:20.761 *********** 2025-06-02 17:43:59.295104 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.295109 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.295115 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.295120 | orchestrator | 2025-06-02 17:43:59.295126 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:43:59.295131 | orchestrator | Monday 02 June 2025 17:38:02 +0000 (0:00:00.258) 0:05:21.019 *********** 2025-06-02 17:43:59.295136 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.295141 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.295145 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.295150 | orchestrator | 2025-06-02 17:43:59.295155 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 
17:43:59.295160 | orchestrator | Monday 02 June 2025 17:38:02 +0000 (0:00:00.433) 0:05:21.452 *********** 2025-06-02 17:43:59.295165 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.295170 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.295175 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.295179 | orchestrator | 2025-06-02 17:43:59.295201 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:43:59.295207 | orchestrator | Monday 02 June 2025 17:38:03 +0000 (0:00:00.285) 0:05:21.738 *********** 2025-06-02 17:43:59.295212 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.295217 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.295221 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.295229 | orchestrator | 2025-06-02 17:43:59.295236 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:43:59.295244 | orchestrator | Monday 02 June 2025 17:38:03 +0000 (0:00:00.764) 0:05:22.503 *********** 2025-06-02 17:43:59.295263 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.295278 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.295285 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.295293 | orchestrator | 2025-06-02 17:43:59.295301 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:43:59.295309 | orchestrator | Monday 02 June 2025 17:38:04 +0000 (0:00:00.241) 0:05:22.744 *********** 2025-06-02 17:43:59.295316 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.295324 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.295332 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.295339 | orchestrator | 2025-06-02 17:43:59.295346 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:43:59.295353 | 
orchestrator | Monday 02 June 2025 17:38:04 +0000 (0:00:00.447) 0:05:23.192 *********** 2025-06-02 17:43:59.295361 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.295368 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.295375 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.295383 | orchestrator | 2025-06-02 17:43:59.295391 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:43:59.295398 | orchestrator | Monday 02 June 2025 17:38:05 +0000 (0:00:00.679) 0:05:23.871 *********** 2025-06-02 17:43:59.295406 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.295414 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.295421 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.295428 | orchestrator | 2025-06-02 17:43:59.295436 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:43:59.295443 | orchestrator | Monday 02 June 2025 17:38:05 +0000 (0:00:00.733) 0:05:24.605 *********** 2025-06-02 17:43:59.295452 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.295459 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.295467 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.295474 | orchestrator | 2025-06-02 17:43:59.295481 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:43:59.295489 | orchestrator | Monday 02 June 2025 17:38:06 +0000 (0:00:00.278) 0:05:24.883 *********** 2025-06-02 17:43:59.295496 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.295504 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.295513 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.295521 | orchestrator | 2025-06-02 17:43:59.295528 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:43:59.295535 | orchestrator | Monday 02 June 2025 17:38:06 +0000 
(0:00:00.487) 0:05:25.371 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 02 June 2025 17:38:06 +0000 (0:00:00.253) 0:05:25.624 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 02 June 2025 17:38:07 +0000 (0:00:00.319) 0:05:25.943 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 02 June 2025 17:38:07 +0000 (0:00:00.369) 0:05:26.313 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 02 June 2025 17:38:08 +0000 (0:00:00.525) 0:05:26.839 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 02 June 2025 17:38:08 +0000 (0:00:00.302) 0:05:27.142 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 02 June 2025 17:38:08 +0000 (0:00:00.381) 0:05:27.523 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 02 June 2025 17:38:09 +0000 (0:00:00.346) 0:05:27.869 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Monday 02 June 2025 17:38:10 +0000 (0:00:00.836) 0:05:28.706 ***********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Monday 02 June 2025 17:38:10 +0000 (0:00:00.613) 0:05:29.319 ***********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Monday 02 June 2025 17:38:11 +0000 (0:00:00.654) 0:05:29.973 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Monday 02 June 2025 17:38:12 +0000 (0:00:00.934) 0:05:30.908 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Monday 02 June 2025 17:38:12 +0000 (0:00:00.313) 0:05:31.222 ***********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Monday 02 June 2025 17:38:23 +0000 (0:00:10.533) 0:05:41.755 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Monday 02 June 2025 17:38:23 +0000 (0:00:00.343) 0:05:42.099 ***********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Monday 02 June 2025 17:38:25 +0000 (0:00:02.444) 0:05:44.544 ***********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-2] => (item=None)
changed: [testbed-node-1] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Monday 02 June 2025 17:38:27 +0000 (0:00:01.524) 0:05:46.069 ***********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Monday 02 June 2025 17:38:28 +0000 (0:00:00.742) 0:05:46.811 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Monday 02 June 2025 17:38:28 +0000 (0:00:00.320) 0:05:47.132 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Monday 02 June 2025 17:38:28 +0000 (0:00:00.292) 0:05:47.424 ***********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Monday 02 June 2025 17:38:29 +0000 (0:00:00.859) 0:05:48.284 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Monday 02 June 2025 17:38:29 +0000 (0:00:00.336) 0:05:48.620 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Monday 02 June 2025 17:38:30 +0000 (0:00:00.303) 0:05:48.924 ***********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Monday 02 June 2025 17:38:31 +0000 (0:00:00.810) 0:05:49.734 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Monday 02 June 2025 17:38:32 +0000 (0:00:01.246) 0:05:50.981 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Monday 02 June 2025 17:38:33 +0000 (0:00:01.197) 0:05:52.178 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Monday 02 June 2025 17:38:35 +0000 (0:00:02.142) 0:05:54.320 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Monday 02 June 2025 17:38:37 +0000 (0:00:01.980) 0:05:56.301 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Monday 02 June 2025 17:38:38 +0000 (0:00:00.428) 0:05:56.729 ***********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Monday 02 June 2025 17:39:08 +0000 (0:00:30.347) 0:06:27.077 ***********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Monday 02 June 2025 17:39:10 +0000 (0:00:01.654) 0:06:28.732 ***********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Monday 02 June 2025 17:39:11 +0000 (0:00:01.005) 0:06:29.737 ***********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Monday 02 June 2025 17:39:11 +0000 (0:00:00.145) 0:06:29.883 ***********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Monday 02 June 2025 17:39:17 +0000 (0:00:06.547) 0:06:36.430 ***********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 02 June 2025 17:39:22 +0000 (0:00:04.765) 0:06:41.196 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 02 June 2025 17:39:23 +0000 (0:00:00.967) 0:06:42.163 ***********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 02 June 2025 17:39:24 +0000 (0:00:00.552) 0:06:42.716 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 02 June 2025 17:39:24 +0000 (0:00:00.298) 0:06:43.014 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 02 June 2025 17:39:26 +0000 (0:00:01.798) 0:06:44.813 ***********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 02 June 2025 17:39:26 +0000 (0:00:00.658) 0:06:45.472 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 02 June 2025 17:39:27 +0000 (0:00:00.533) 0:06:46.005 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 02 June 2025 17:39:28 +0000 (0:00:00.797) 0:06:46.803 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 02 June 2025 17:39:28 +0000 (0:00:00.547) 0:06:47.350 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 02 June 2025 17:39:28 +0000 (0:00:00.320) 0:06:47.670 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 02 June 2025 17:39:30 +0000 (0:00:01.031) 0:06:48.702 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 02 June 2025 17:39:30 +0000 (0:00:00.696) 0:06:49.398 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 02 June 2025 17:39:31 +0000 (0:00:00.759) 0:06:50.157 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 02 June 2025 17:39:31 +0000 (0:00:00.318) 0:06:50.476 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 02 June 2025 17:39:32 +0000 (0:00:00.595) 0:06:51.072 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 02 June 2025 17:39:32 +0000 (0:00:00.300) 0:06:51.372 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 02 June 2025 17:39:33 +0000 (0:00:00.643) 0:06:52.016 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 02 June 2025 17:39:34 +0000 (0:00:00.695) 0:06:52.711 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 02 June 2025 17:39:34 +0000 (0:00:00.616) 0:06:53.328 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 02 June 2025 17:39:34 +0000 (0:00:00.349) 0:06:53.678 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 02 June 2025 17:39:35 +0000 (0:00:00.316) 0:06:53.994 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 02 June 2025 17:39:35 +0000 (0:00:00.359) 0:06:54.353 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 02 June 2025 17:39:36 +0000 (0:00:00.631) 0:06:54.985 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 02 June 2025 17:39:36 +0000 (0:00:00.302) 0:06:55.287 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 02 June 2025 17:39:36 +0000 (0:00:00.298) 0:06:55.585 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 02 June 2025 17:39:37 +0000 (0:00:00.307) 0:06:55.892 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 02 June 2025 17:39:37 +0000 (0:00:00.626) 0:06:56.519 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Monday 02 June 2025 17:39:38 +0000 (0:00:00.548) 0:06:57.067 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Monday 02 June 2025 17:39:38 +0000 (0:00:00.296) 0:06:57.364 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Monday 02 June 2025 17:39:39 +0000 (0:00:00.909) 0:06:58.273 ***********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Monday 02 June 2025 17:39:40 +0000 (0:00:00.805) 0:06:59.078 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Monday 02 June 2025 17:39:40 +0000 (0:00:00.334) 0:06:59.413 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Monday 02 June 2025 17:39:41 +0000 (0:00:00.334) 0:06:59.747 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Monday 02 June 2025 17:39:42 +0000 (0:00:00.970) 0:07:00.718 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Monday 02 June 2025 17:39:42 +0000 (0:00:00.340) 0:07:01.058 ***********
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Monday 02 June 2025 17:39:45 +0000 (0:00:03.068) 0:07:04.127 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Monday 02 June 2025 17:39:45 +0000 (0:00:00.287) 0:07:04.414 ***********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Monday 02 June 2025 17:39:46 +0000 (0:00:00.794) 0:07:05.208 ***********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Monday 02 June 2025 17:39:47 +0000 (0:00:00.969) 0:07:06.177 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Monday 02 June 2025 17:39:49 +0000 (0:00:02.113) 0:07:08.291 ***********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Monday 02 June 2025 17:39:51 +0000 (0:00:01.531) 0:07:09.822 ***********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Monday 02 June 2025 17:39:53 +0000 (0:00:02.217) 0:07:12.040 ***********
included:
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.298278 | orchestrator | 2025-06-02 17:43:59.298286 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-02 17:43:59.298293 | orchestrator | Monday 02 June 2025 17:39:53 +0000 (0:00:00.534) 0:07:12.574 *********** 2025-06-02 17:43:59.298301 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-33d58ee2-4c10-58b1-ba9c-becc4d68c01c', 'data_vg': 'ceph-33d58ee2-4c10-58b1-ba9c-becc4d68c01c'}) 2025-06-02 17:43:59.298309 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de836c00-0412-5e15-aa8a-abef9bebfb26', 'data_vg': 'ceph-de836c00-0412-5e15-aa8a-abef9bebfb26'}) 2025-06-02 17:43:59.298322 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704', 'data_vg': 'ceph-94958c5d-ab49-5ebf-a5cb-ef67fe0a9704'}) 2025-06-02 17:43:59.298330 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b', 'data_vg': 'ceph-a4a4ffc0-4b1a-5123-a777-2de0f9f46a6b'}) 2025-06-02 17:43:59.298337 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-42dde184-17ae-50b7-8921-f17969f5efd9', 'data_vg': 'ceph-42dde184-17ae-50b7-8921-f17969f5efd9'}) 2025-06-02 17:43:59.298345 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9', 'data_vg': 'ceph-c404b240-9cf0-5c0e-97ba-c570a8ba4cd9'}) 2025-06-02 17:43:59.298352 | orchestrator | 2025-06-02 17:43:59.298360 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-02 17:43:59.298367 | orchestrator | Monday 02 June 2025 17:40:36 +0000 (0:00:43.050) 0:07:55.625 *********** 2025-06-02 17:43:59.298371 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298376 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
17:43:59.298380 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.298385 | orchestrator | 2025-06-02 17:43:59.298390 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-02 17:43:59.298394 | orchestrator | Monday 02 June 2025 17:40:37 +0000 (0:00:00.633) 0:07:56.259 *********** 2025-06-02 17:43:59.298399 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.298403 | orchestrator | 2025-06-02 17:43:59.298408 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-02 17:43:59.298412 | orchestrator | Monday 02 June 2025 17:40:38 +0000 (0:00:00.539) 0:07:56.798 *********** 2025-06-02 17:43:59.298417 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.298421 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.298426 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.298430 | orchestrator | 2025-06-02 17:43:59.298435 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-02 17:43:59.298439 | orchestrator | Monday 02 June 2025 17:40:38 +0000 (0:00:00.663) 0:07:57.462 *********** 2025-06-02 17:43:59.298444 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.298448 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.298453 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.298457 | orchestrator | 2025-06-02 17:43:59.298462 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-02 17:43:59.298466 | orchestrator | Monday 02 June 2025 17:40:41 +0000 (0:00:02.971) 0:08:00.434 *********** 2025-06-02 17:43:59.298475 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.298480 | orchestrator | 2025-06-02 17:43:59.298485 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-02 17:43:59.298489 | orchestrator | Monday 02 June 2025 17:40:42 +0000 (0:00:00.548) 0:08:00.982 *********** 2025-06-02 17:43:59.298494 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.298498 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.298502 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.298507 | orchestrator | 2025-06-02 17:43:59.298511 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-02 17:43:59.298516 | orchestrator | Monday 02 June 2025 17:40:43 +0000 (0:00:01.151) 0:08:02.134 *********** 2025-06-02 17:43:59.298520 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.298525 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.298530 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.298534 | orchestrator | 2025-06-02 17:43:59.298539 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-02 17:43:59.298543 | orchestrator | Monday 02 June 2025 17:40:44 +0000 (0:00:01.386) 0:08:03.521 *********** 2025-06-02 17:43:59.298573 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.298583 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.298588 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.298592 | orchestrator | 2025-06-02 17:43:59.298597 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-02 17:43:59.298601 | orchestrator | Monday 02 June 2025 17:40:46 +0000 (0:00:01.608) 0:08:05.129 *********** 2025-06-02 17:43:59.298606 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298610 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.298615 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.298619 | orchestrator | 2025-06-02 17:43:59.298624 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-02 17:43:59.298628 | orchestrator | Monday 02 June 2025 17:40:46 +0000 (0:00:00.332) 0:08:05.462 *********** 2025-06-02 17:43:59.298633 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298637 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.298642 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.298646 | orchestrator | 2025-06-02 17:43:59.298651 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-02 17:43:59.298655 | orchestrator | Monday 02 June 2025 17:40:47 +0000 (0:00:00.299) 0:08:05.761 *********** 2025-06-02 17:43:59.298660 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-02 17:43:59.298664 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-06-02 17:43:59.298669 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-06-02 17:43:59.298673 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 17:43:59.298678 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-06-02 17:43:59.298682 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-06-02 17:43:59.298687 | orchestrator | 2025-06-02 17:43:59.298691 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-02 17:43:59.298696 | orchestrator | Monday 02 June 2025 17:40:48 +0000 (0:00:01.284) 0:08:07.046 *********** 2025-06-02 17:43:59.298701 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-02 17:43:59.298705 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-02 17:43:59.298710 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-06-02 17:43:59.298714 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 17:43:59.298719 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-06-02 17:43:59.298723 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-02 17:43:59.298728 | orchestrator | 2025-06-02 17:43:59.298732 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-02 17:43:59.298737 | orchestrator | Monday 02 June 2025 17:40:50 +0000 (0:00:02.252) 0:08:09.299 *********** 2025-06-02 17:43:59.298741 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-02 17:43:59.298746 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-06-02 17:43:59.298751 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-02 17:43:59.298755 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 17:43:59.298759 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-02 17:43:59.298764 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-06-02 17:43:59.298768 | orchestrator | 2025-06-02 17:43:59.298773 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-02 17:43:59.298777 | orchestrator | Monday 02 June 2025 17:40:54 +0000 (0:00:03.475) 0:08:12.774 *********** 2025-06-02 17:43:59.298785 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298790 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.298794 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:43:59.298798 | orchestrator | 2025-06-02 17:43:59.298803 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-02 17:43:59.298808 | orchestrator | Monday 02 June 2025 17:40:57 +0000 (0:00:03.024) 0:08:15.799 *********** 2025-06-02 17:43:59.298812 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298817 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.298821 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-06-02 17:43:59.298828 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:43:59.298833 | orchestrator | 2025-06-02 17:43:59.298837 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-02 17:43:59.298842 | orchestrator | Monday 02 June 2025 17:41:10 +0000 (0:00:13.041) 0:08:28.841 *********** 2025-06-02 17:43:59.298846 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298851 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.298855 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.298860 | orchestrator | 2025-06-02 17:43:59.298864 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:43:59.298869 | orchestrator | Monday 02 June 2025 17:41:11 +0000 (0:00:00.890) 0:08:29.732 *********** 2025-06-02 17:43:59.298873 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298878 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.298882 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.298887 | orchestrator | 2025-06-02 17:43:59.298895 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-02 17:43:59.298900 | orchestrator | Monday 02 June 2025 17:41:11 +0000 (0:00:00.605) 0:08:30.338 *********** 2025-06-02 17:43:59.298905 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.298909 | orchestrator | 2025-06-02 17:43:59.298914 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-02 17:43:59.298918 | orchestrator | Monday 02 June 2025 17:41:12 +0000 (0:00:00.583) 0:08:30.922 *********** 2025-06-02 17:43:59.298923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:43:59.298927 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-02 17:43:59.298931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:43:59.298961 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298967 | orchestrator | 2025-06-02 17:43:59.298971 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-02 17:43:59.298976 | orchestrator | Monday 02 June 2025 17:41:12 +0000 (0:00:00.389) 0:08:31.311 *********** 2025-06-02 17:43:59.298980 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.298985 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.298989 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.298994 | orchestrator | 2025-06-02 17:43:59.298998 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-02 17:43:59.299003 | orchestrator | Monday 02 June 2025 17:41:12 +0000 (0:00:00.302) 0:08:31.614 *********** 2025-06-02 17:43:59.299007 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299012 | orchestrator | 2025-06-02 17:43:59.299050 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-02 17:43:59.299055 | orchestrator | Monday 02 June 2025 17:41:13 +0000 (0:00:00.213) 0:08:31.827 *********** 2025-06-02 17:43:59.299060 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299064 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299069 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299073 | orchestrator | 2025-06-02 17:43:59.299078 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-02 17:43:59.299082 | orchestrator | Monday 02 June 2025 17:41:13 +0000 (0:00:00.588) 0:08:32.416 *********** 2025-06-02 17:43:59.299087 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299091 | orchestrator | 2025-06-02 17:43:59.299096 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 17:43:59.299100 | orchestrator | Monday 02 June 2025 17:41:13 +0000 (0:00:00.243) 0:08:32.659 *********** 2025-06-02 17:43:59.299105 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299109 | orchestrator | 2025-06-02 17:43:59.299114 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 17:43:59.299118 | orchestrator | Monday 02 June 2025 17:41:14 +0000 (0:00:00.227) 0:08:32.887 *********** 2025-06-02 17:43:59.299130 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299135 | orchestrator | 2025-06-02 17:43:59.299139 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 17:43:59.299144 | orchestrator | Monday 02 June 2025 17:41:14 +0000 (0:00:00.115) 0:08:33.002 *********** 2025-06-02 17:43:59.299148 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299153 | orchestrator | 2025-06-02 17:43:59.299157 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 17:43:59.299162 | orchestrator | Monday 02 June 2025 17:41:14 +0000 (0:00:00.263) 0:08:33.266 *********** 2025-06-02 17:43:59.299166 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299171 | orchestrator | 2025-06-02 17:43:59.299175 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 17:43:59.299180 | orchestrator | Monday 02 June 2025 17:41:14 +0000 (0:00:00.212) 0:08:33.479 *********** 2025-06-02 17:43:59.299184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:43:59.299189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:43:59.299193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:43:59.299198 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
17:43:59.299202 | orchestrator | 2025-06-02 17:43:59.299207 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 17:43:59.299212 | orchestrator | Monday 02 June 2025 17:41:15 +0000 (0:00:00.365) 0:08:33.844 *********** 2025-06-02 17:43:59.299219 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299224 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299229 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299233 | orchestrator | 2025-06-02 17:43:59.299238 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 17:43:59.299242 | orchestrator | Monday 02 June 2025 17:41:15 +0000 (0:00:00.394) 0:08:34.239 *********** 2025-06-02 17:43:59.299246 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299250 | orchestrator | 2025-06-02 17:43:59.299254 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 17:43:59.299258 | orchestrator | Monday 02 June 2025 17:41:16 +0000 (0:00:00.807) 0:08:35.047 *********** 2025-06-02 17:43:59.299262 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299266 | orchestrator | 2025-06-02 17:43:59.299270 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-02 17:43:59.299274 | orchestrator | 2025-06-02 17:43:59.299278 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:43:59.299282 | orchestrator | Monday 02 June 2025 17:41:17 +0000 (0:00:00.711) 0:08:35.758 *********** 2025-06-02 17:43:59.299287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.299293 | orchestrator | 2025-06-02 17:43:59.299297 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 17:43:59.299301 | orchestrator | Monday 02 June 2025 17:41:18 +0000 (0:00:01.238) 0:08:36.997 *********** 2025-06-02 17:43:59.299310 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.299314 | orchestrator | 2025-06-02 17:43:59.299318 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:43:59.299322 | orchestrator | Monday 02 June 2025 17:41:19 +0000 (0:00:01.267) 0:08:38.264 *********** 2025-06-02 17:43:59.299326 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299330 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299334 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.299339 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.299343 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.299351 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299355 | orchestrator | 2025-06-02 17:43:59.299359 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:43:59.299364 | orchestrator | Monday 02 June 2025 17:41:20 +0000 (0:00:00.849) 0:08:39.114 *********** 2025-06-02 17:43:59.299368 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299372 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299376 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299380 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299384 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299388 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299392 | orchestrator | 2025-06-02 17:43:59.299396 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:43:59.299400 | orchestrator | Monday 02 
June 2025 17:41:21 +0000 (0:00:00.984) 0:08:40.098 *********** 2025-06-02 17:43:59.299404 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299408 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299412 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299416 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299420 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299425 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299429 | orchestrator | 2025-06-02 17:43:59.299433 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 17:43:59.299437 | orchestrator | Monday 02 June 2025 17:41:22 +0000 (0:00:01.269) 0:08:41.367 *********** 2025-06-02 17:43:59.299441 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299445 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299449 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299453 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299457 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299461 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299465 | orchestrator | 2025-06-02 17:43:59.299469 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:43:59.299473 | orchestrator | Monday 02 June 2025 17:41:23 +0000 (0:00:01.013) 0:08:42.381 *********** 2025-06-02 17:43:59.299478 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299482 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299486 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.299490 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.299494 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.299498 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299502 | orchestrator | 2025-06-02 17:43:59.299506 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-02 17:43:59.299510 | orchestrator | Monday 02 June 2025 17:41:24 +0000 (0:00:00.866) 0:08:43.248 *********** 2025-06-02 17:43:59.299514 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299522 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299526 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299530 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299534 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299539 | orchestrator | 2025-06-02 17:43:59.299543 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:43:59.299547 | orchestrator | Monday 02 June 2025 17:41:25 +0000 (0:00:00.616) 0:08:43.865 *********** 2025-06-02 17:43:59.299551 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299555 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299559 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299563 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299567 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299571 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299575 | orchestrator | 2025-06-02 17:43:59.299580 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:43:59.299584 | orchestrator | Monday 02 June 2025 17:41:26 +0000 (0:00:00.948) 0:08:44.813 *********** 2025-06-02 17:43:59.299594 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.299598 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.299602 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.299606 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299611 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299615 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299619 | 
orchestrator | 2025-06-02 17:43:59.299623 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:43:59.299627 | orchestrator | Monday 02 June 2025 17:41:27 +0000 (0:00:01.162) 0:08:45.975 *********** 2025-06-02 17:43:59.299631 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.299635 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.299639 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.299643 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299647 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299651 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299655 | orchestrator | 2025-06-02 17:43:59.299659 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:43:59.299663 | orchestrator | Monday 02 June 2025 17:41:28 +0000 (0:00:01.242) 0:08:47.218 *********** 2025-06-02 17:43:59.299667 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299672 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299676 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299680 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299684 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299688 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299692 | orchestrator | 2025-06-02 17:43:59.299696 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:43:59.299700 | orchestrator | Monday 02 June 2025 17:41:29 +0000 (0:00:00.611) 0:08:47.830 *********** 2025-06-02 17:43:59.299704 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.299711 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.299715 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.299719 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299724 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
17:43:59.299728 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299732 | orchestrator | 2025-06-02 17:43:59.299736 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:43:59.299740 | orchestrator | Monday 02 June 2025 17:41:29 +0000 (0:00:00.808) 0:08:48.638 *********** 2025-06-02 17:43:59.299744 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299748 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299752 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299756 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299760 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299765 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299769 | orchestrator | 2025-06-02 17:43:59.299773 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:43:59.299777 | orchestrator | Monday 02 June 2025 17:41:30 +0000 (0:00:00.631) 0:08:49.270 *********** 2025-06-02 17:43:59.299781 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299785 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299789 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299793 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299797 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299801 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299805 | orchestrator | 2025-06-02 17:43:59.299810 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:43:59.299814 | orchestrator | Monday 02 June 2025 17:41:31 +0000 (0:00:00.869) 0:08:50.140 *********** 2025-06-02 17:43:59.299818 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299822 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299826 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299830 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 17:43:59.299838 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299842 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299846 | orchestrator | 2025-06-02 17:43:59.299850 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 17:43:59.299854 | orchestrator | Monday 02 June 2025 17:41:32 +0000 (0:00:00.636) 0:08:50.776 *********** 2025-06-02 17:43:59.299859 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299863 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299867 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299871 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299875 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299879 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299883 | orchestrator | 2025-06-02 17:43:59.299887 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:43:59.299891 | orchestrator | Monday 02 June 2025 17:41:32 +0000 (0:00:00.852) 0:08:51.629 *********** 2025-06-02 17:43:59.299895 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:59.299899 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:59.299903 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:59.299908 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299912 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299916 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299920 | orchestrator | 2025-06-02 17:43:59.299924 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:43:59.299928 | orchestrator | Monday 02 June 2025 17:41:33 +0000 (0:00:00.573) 0:08:52.202 *********** 2025-06-02 17:43:59.299932 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.299936 | orchestrator | ok: [testbed-node-1] 2025-06-02 
17:43:59.299940 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.299944 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.299948 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.299952 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.299956 | orchestrator | 2025-06-02 17:43:59.299960 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:43:59.299965 | orchestrator | Monday 02 June 2025 17:41:34 +0000 (0:00:00.870) 0:08:53.072 *********** 2025-06-02 17:43:59.299969 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.299973 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.299977 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.299981 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.299985 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.299989 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.299993 | orchestrator | 2025-06-02 17:43:59.299997 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 17:43:59.300002 | orchestrator | Monday 02 June 2025 17:41:35 +0000 (0:00:00.657) 0:08:53.730 *********** 2025-06-02 17:43:59.300008 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.300012 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.300026 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.300030 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300035 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300039 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300043 | orchestrator | 2025-06-02 17:43:59.300047 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-02 17:43:59.300051 | orchestrator | Monday 02 June 2025 17:41:36 +0000 (0:00:01.280) 0:08:55.011 *********** 2025-06-02 17:43:59.300055 | orchestrator | changed: [testbed-node-0] 2025-06-02 
17:43:59.300059 | orchestrator | 2025-06-02 17:43:59.300063 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-02 17:43:59.300067 | orchestrator | Monday 02 June 2025 17:41:40 +0000 (0:00:04.085) 0:08:59.096 *********** 2025-06-02 17:43:59.300072 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.300076 | orchestrator | 2025-06-02 17:43:59.300080 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-02 17:43:59.300084 | orchestrator | Monday 02 June 2025 17:41:42 +0000 (0:00:02.211) 0:09:01.307 *********** 2025-06-02 17:43:59.300094 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.300098 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.300102 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.300106 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.300110 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.300114 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.300118 | orchestrator | 2025-06-02 17:43:59.300123 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-02 17:43:59.300127 | orchestrator | Monday 02 June 2025 17:41:44 +0000 (0:00:01.726) 0:09:03.034 *********** 2025-06-02 17:43:59.300134 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.300138 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.300143 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.300147 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.300151 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.300155 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.300159 | orchestrator | 2025-06-02 17:43:59.300163 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-02 17:43:59.300168 | orchestrator | Monday 02 June 2025 17:41:45 +0000 
(0:00:00.990) 0:09:04.025 *********** 2025-06-02 17:43:59.300172 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.300177 | orchestrator | 2025-06-02 17:43:59.300181 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-02 17:43:59.300185 | orchestrator | Monday 02 June 2025 17:41:46 +0000 (0:00:01.292) 0:09:05.318 *********** 2025-06-02 17:43:59.300189 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.300193 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.300197 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.300201 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.300205 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.300209 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.300213 | orchestrator | 2025-06-02 17:43:59.300217 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-02 17:43:59.300222 | orchestrator | Monday 02 June 2025 17:41:48 +0000 (0:00:01.811) 0:09:07.129 *********** 2025-06-02 17:43:59.300226 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.300230 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.300234 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.300238 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.300242 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.300246 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.300250 | orchestrator | 2025-06-02 17:43:59.300254 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-02 17:43:59.300258 | orchestrator | Monday 02 June 2025 17:41:51 +0000 (0:00:03.306) 0:09:10.436 *********** 2025-06-02 17:43:59.300263 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.300267 | orchestrator | 2025-06-02 17:43:59.300271 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-02 17:43:59.300275 | orchestrator | Monday 02 June 2025 17:41:53 +0000 (0:00:01.275) 0:09:11.712 *********** 2025-06-02 17:43:59.300279 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.300283 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.300288 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:59.300292 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300296 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300300 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300304 | orchestrator | 2025-06-02 17:43:59.300308 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-02 17:43:59.300312 | orchestrator | Monday 02 June 2025 17:41:53 +0000 (0:00:00.882) 0:09:12.595 *********** 2025-06-02 17:43:59.300320 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:59.300324 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:59.300328 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.300332 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:59.300336 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.300340 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.300344 | orchestrator | 2025-06-02 17:43:59.300348 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-02 17:43:59.300352 | orchestrator | Monday 02 June 2025 17:41:56 +0000 (0:00:02.336) 0:09:14.931 *********** 2025-06-02 17:43:59.300356 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:59.300360 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:59.300365 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 17:43:59.300369 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300373 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300377 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300381 | orchestrator | 2025-06-02 17:43:59.300385 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-02 17:43:59.300389 | orchestrator | 2025-06-02 17:43:59.300393 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:43:59.300400 | orchestrator | Monday 02 June 2025 17:41:57 +0000 (0:00:01.107) 0:09:16.039 *********** 2025-06-02 17:43:59.300404 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.300408 | orchestrator | 2025-06-02 17:43:59.300412 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 17:43:59.300416 | orchestrator | Monday 02 June 2025 17:41:57 +0000 (0:00:00.491) 0:09:16.530 *********** 2025-06-02 17:43:59.300420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.300424 | orchestrator | 2025-06-02 17:43:59.300428 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:43:59.300432 | orchestrator | Monday 02 June 2025 17:41:58 +0000 (0:00:00.773) 0:09:17.304 *********** 2025-06-02 17:43:59.300436 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300440 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300444 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300448 | orchestrator | 2025-06-02 17:43:59.300452 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:43:59.300457 | orchestrator | 
Monday 02 June 2025 17:41:58 +0000 (0:00:00.299) 0:09:17.604 *********** 2025-06-02 17:43:59.300461 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300465 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300469 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300473 | orchestrator | 2025-06-02 17:43:59.300477 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:43:59.300484 | orchestrator | Monday 02 June 2025 17:41:59 +0000 (0:00:00.662) 0:09:18.266 *********** 2025-06-02 17:43:59.300488 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300492 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300496 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300500 | orchestrator | 2025-06-02 17:43:59.300505 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 17:43:59.300509 | orchestrator | Monday 02 June 2025 17:42:00 +0000 (0:00:01.030) 0:09:19.296 *********** 2025-06-02 17:43:59.300513 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300517 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300521 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300525 | orchestrator | 2025-06-02 17:43:59.300529 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:43:59.300533 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:00.700) 0:09:19.997 *********** 2025-06-02 17:43:59.300537 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300544 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300548 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300552 | orchestrator | 2025-06-02 17:43:59.300556 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:43:59.300560 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:00.317) 
0:09:20.314 *********** 2025-06-02 17:43:59.300564 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300569 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300573 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300577 | orchestrator | 2025-06-02 17:43:59.300581 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:43:59.300585 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:00.354) 0:09:20.668 *********** 2025-06-02 17:43:59.300589 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300593 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300597 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300601 | orchestrator | 2025-06-02 17:43:59.300605 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:43:59.300609 | orchestrator | Monday 02 June 2025 17:42:02 +0000 (0:00:00.594) 0:09:21.263 *********** 2025-06-02 17:43:59.300614 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300618 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300622 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300626 | orchestrator | 2025-06-02 17:43:59.300630 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:43:59.300634 | orchestrator | Monday 02 June 2025 17:42:03 +0000 (0:00:00.735) 0:09:21.999 *********** 2025-06-02 17:43:59.300638 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300642 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300646 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300650 | orchestrator | 2025-06-02 17:43:59.300654 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:43:59.300659 | orchestrator | Monday 02 June 2025 17:42:04 +0000 (0:00:00.782) 0:09:22.782 *********** 2025-06-02 
17:43:59.300663 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300667 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300671 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300675 | orchestrator | 2025-06-02 17:43:59.300679 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:43:59.300683 | orchestrator | Monday 02 June 2025 17:42:04 +0000 (0:00:00.313) 0:09:23.096 *********** 2025-06-02 17:43:59.300687 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300691 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300695 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300699 | orchestrator | 2025-06-02 17:43:59.300703 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:43:59.300707 | orchestrator | Monday 02 June 2025 17:42:04 +0000 (0:00:00.580) 0:09:23.676 *********** 2025-06-02 17:43:59.300712 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300716 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300720 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300724 | orchestrator | 2025-06-02 17:43:59.300728 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:43:59.300732 | orchestrator | Monday 02 June 2025 17:42:05 +0000 (0:00:00.340) 0:09:24.017 *********** 2025-06-02 17:43:59.300736 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300740 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300744 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300748 | orchestrator | 2025-06-02 17:43:59.300753 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:43:59.300757 | orchestrator | Monday 02 June 2025 17:42:05 +0000 (0:00:00.432) 0:09:24.450 *********** 2025-06-02 17:43:59.300763 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 17:43:59.300767 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300771 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300779 | orchestrator | 2025-06-02 17:43:59.300783 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 17:43:59.300787 | orchestrator | Monday 02 June 2025 17:42:06 +0000 (0:00:00.348) 0:09:24.798 *********** 2025-06-02 17:43:59.300792 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300796 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300800 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300804 | orchestrator | 2025-06-02 17:43:59.300808 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:43:59.300812 | orchestrator | Monday 02 June 2025 17:42:06 +0000 (0:00:00.554) 0:09:25.353 *********** 2025-06-02 17:43:59.300816 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300820 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300824 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300828 | orchestrator | 2025-06-02 17:43:59.300832 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:43:59.300836 | orchestrator | Monday 02 June 2025 17:42:06 +0000 (0:00:00.316) 0:09:25.669 *********** 2025-06-02 17:43:59.300840 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300844 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300848 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300852 | orchestrator | 2025-06-02 17:43:59.300856 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:43:59.300860 | orchestrator | Monday 02 June 2025 17:42:07 +0000 (0:00:00.310) 0:09:25.979 *********** 2025-06-02 17:43:59.300865 | orchestrator | ok: [testbed-node-3] 
2025-06-02 17:43:59.300872 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300876 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300881 | orchestrator | 2025-06-02 17:43:59.300885 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 17:43:59.300889 | orchestrator | Monday 02 June 2025 17:42:07 +0000 (0:00:00.321) 0:09:26.300 *********** 2025-06-02 17:43:59.300893 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.300897 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.300901 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.300905 | orchestrator | 2025-06-02 17:43:59.300909 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-02 17:43:59.300914 | orchestrator | Monday 02 June 2025 17:42:08 +0000 (0:00:00.843) 0:09:27.144 *********** 2025-06-02 17:43:59.300918 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.300922 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.300926 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-02 17:43:59.300930 | orchestrator | 2025-06-02 17:43:59.300934 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-02 17:43:59.300938 | orchestrator | Monday 02 June 2025 17:42:08 +0000 (0:00:00.395) 0:09:27.540 *********** 2025-06-02 17:43:59.300942 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:43:59.300947 | orchestrator | 2025-06-02 17:43:59.300951 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-02 17:43:59.300955 | orchestrator | Monday 02 June 2025 17:42:10 +0000 (0:00:02.085) 0:09:29.625 *********** 2025-06-02 17:43:59.300960 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-02 17:43:59.300965 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.300969 | orchestrator | 2025-06-02 17:43:59.300974 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-02 17:43:59.300978 | orchestrator | Monday 02 June 2025 17:42:11 +0000 (0:00:00.232) 0:09:29.858 *********** 2025-06-02 17:43:59.300983 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:43:59.300997 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:43:59.301001 | orchestrator | 2025-06-02 17:43:59.301005 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-02 17:43:59.301009 | orchestrator | Monday 02 June 2025 17:42:19 +0000 (0:00:08.434) 0:09:38.292 *********** 2025-06-02 17:43:59.301013 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:43:59.301029 | orchestrator | 2025-06-02 17:43:59.301033 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-02 17:43:59.301037 | orchestrator | Monday 02 June 2025 17:42:23 +0000 (0:00:03.724) 0:09:42.016 *********** 2025-06-02 17:43:59.301041 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.301045 | orchestrator | 2025-06-02 17:43:59.301049 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-02 17:43:59.301054 | orchestrator | Monday 02 June 2025 17:42:23 +0000 (0:00:00.621) 0:09:42.637 *********** 2025-06-02 17:43:59.301058 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 17:43:59.301062 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 17:43:59.301066 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 17:43:59.301073 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-02 17:43:59.301077 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-02 17:43:59.301083 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-02 17:43:59.301090 | orchestrator | 2025-06-02 17:43:59.301096 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-02 17:43:59.301103 | orchestrator | Monday 02 June 2025 17:42:25 +0000 (0:00:01.195) 0:09:43.833 *********** 2025-06-02 17:43:59.301112 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.301122 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 17:43:59.301132 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:43:59.301138 | orchestrator | 2025-06-02 17:43:59.301144 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-02 17:43:59.301151 | orchestrator | Monday 02 June 2025 17:42:27 +0000 (0:00:02.822) 0:09:46.656 *********** 2025-06-02 17:43:59.301158 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:43:59.301164 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 17:43:59.301170 | orchestrator | changed: [testbed-node-3] 
2025-06-02 17:43:59.301177 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:43:59.301183 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 17:43:59.301189 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301196 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:43:59.301204 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 17:43:59.301214 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301220 | orchestrator | 2025-06-02 17:43:59.301227 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-02 17:43:59.301233 | orchestrator | Monday 02 June 2025 17:42:29 +0000 (0:00:01.676) 0:09:48.332 *********** 2025-06-02 17:43:59.301239 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.301247 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301253 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301260 | orchestrator | 2025-06-02 17:43:59.301266 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-02 17:43:59.301279 | orchestrator | Monday 02 June 2025 17:42:32 +0000 (0:00:02.754) 0:09:51.086 *********** 2025-06-02 17:43:59.301285 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301291 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.301298 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.301305 | orchestrator | 2025-06-02 17:43:59.301312 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-02 17:43:59.301319 | orchestrator | Monday 02 June 2025 17:42:32 +0000 (0:00:00.329) 0:09:51.416 *********** 2025-06-02 17:43:59.301324 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.301328 | orchestrator | 2025-06-02 17:43:59.301332 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-02 17:43:59.301336 | orchestrator | Monday 02 June 2025 17:42:33 +0000 (0:00:00.828) 0:09:52.245 *********** 2025-06-02 17:43:59.301340 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.301344 | orchestrator | 2025-06-02 17:43:59.301349 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-02 17:43:59.301353 | orchestrator | Monday 02 June 2025 17:42:34 +0000 (0:00:00.533) 0:09:52.778 *********** 2025-06-02 17:43:59.301357 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301361 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.301365 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301369 | orchestrator | 2025-06-02 17:43:59.301373 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-02 17:43:59.301377 | orchestrator | Monday 02 June 2025 17:42:35 +0000 (0:00:01.257) 0:09:54.036 *********** 2025-06-02 17:43:59.301381 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.301386 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301390 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301394 | orchestrator | 2025-06-02 17:43:59.301398 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-02 17:43:59.301402 | orchestrator | Monday 02 June 2025 17:42:36 +0000 (0:00:01.453) 0:09:55.489 *********** 2025-06-02 17:43:59.301406 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.301410 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301414 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301418 | orchestrator | 2025-06-02 17:43:59.301422 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-06-02 17:43:59.301426 | orchestrator | Monday 02 June 2025 17:42:38 +0000 (0:00:01.732) 0:09:57.222 *********** 2025-06-02 17:43:59.301430 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.301434 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301439 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301443 | orchestrator | 2025-06-02 17:43:59.301447 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-02 17:43:59.301451 | orchestrator | Monday 02 June 2025 17:42:40 +0000 (0:00:02.015) 0:09:59.238 *********** 2025-06-02 17:43:59.301455 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301459 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301463 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301468 | orchestrator | 2025-06-02 17:43:59.301472 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:43:59.301476 | orchestrator | Monday 02 June 2025 17:42:41 +0000 (0:00:01.435) 0:10:00.673 *********** 2025-06-02 17:43:59.301480 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.301484 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301488 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301492 | orchestrator | 2025-06-02 17:43:59.301496 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 17:43:59.301500 | orchestrator | Monday 02 June 2025 17:42:42 +0000 (0:00:00.706) 0:10:01.380 *********** 2025-06-02 17:43:59.301511 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.301515 | orchestrator | 2025-06-02 17:43:59.301519 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 17:43:59.301523 | orchestrator | 
Monday 02 June 2025 17:42:43 +0000 (0:00:00.832) 0:10:02.213 *********** 2025-06-02 17:43:59.301527 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301531 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301535 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301540 | orchestrator | 2025-06-02 17:43:59.301544 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 17:43:59.301548 | orchestrator | Monday 02 June 2025 17:42:43 +0000 (0:00:00.334) 0:10:02.547 *********** 2025-06-02 17:43:59.301552 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.301556 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.301560 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.301564 | orchestrator | 2025-06-02 17:43:59.301568 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 17:43:59.301572 | orchestrator | Monday 02 June 2025 17:42:45 +0000 (0:00:01.258) 0:10:03.805 *********** 2025-06-02 17:43:59.301576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:43:59.301581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:43:59.301585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:43:59.301589 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301593 | orchestrator | 2025-06-02 17:43:59.301597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 17:43:59.301604 | orchestrator | Monday 02 June 2025 17:42:46 +0000 (0:00:00.926) 0:10:04.731 *********** 2025-06-02 17:43:59.301608 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301612 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301616 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301621 | orchestrator | 2025-06-02 17:43:59.301625 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-06-02 17:43:59.301629 | orchestrator | 2025-06-02 17:43:59.301633 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:43:59.301637 | orchestrator | Monday 02 June 2025 17:42:46 +0000 (0:00:00.819) 0:10:05.551 *********** 2025-06-02 17:43:59.301641 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.301645 | orchestrator | 2025-06-02 17:43:59.301649 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 17:43:59.301653 | orchestrator | Monday 02 June 2025 17:42:47 +0000 (0:00:00.524) 0:10:06.075 *********** 2025-06-02 17:43:59.301658 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.301662 | orchestrator | 2025-06-02 17:43:59.301666 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:43:59.301670 | orchestrator | Monday 02 June 2025 17:42:48 +0000 (0:00:00.815) 0:10:06.890 *********** 2025-06-02 17:43:59.301674 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301678 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.301682 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.301686 | orchestrator | 2025-06-02 17:43:59.301690 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:43:59.301694 | orchestrator | Monday 02 June 2025 17:42:48 +0000 (0:00:00.323) 0:10:07.214 *********** 2025-06-02 17:43:59.301699 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301703 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301707 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301711 | orchestrator | 
2025-06-02 17:43:59.301715 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:43:59.301719 | orchestrator | Monday 02 June 2025 17:42:49 +0000 (0:00:00.689) 0:10:07.903 *********** 2025-06-02 17:43:59.301727 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301731 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301735 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301739 | orchestrator | 2025-06-02 17:43:59.301743 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 17:43:59.301747 | orchestrator | Monday 02 June 2025 17:42:49 +0000 (0:00:00.706) 0:10:08.609 *********** 2025-06-02 17:43:59.301751 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301755 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301759 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301764 | orchestrator | 2025-06-02 17:43:59.301768 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:43:59.301772 | orchestrator | Monday 02 June 2025 17:42:50 +0000 (0:00:01.059) 0:10:09.669 *********** 2025-06-02 17:43:59.301776 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301780 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.301784 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.301788 | orchestrator | 2025-06-02 17:43:59.301792 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:43:59.301797 | orchestrator | Monday 02 June 2025 17:42:51 +0000 (0:00:00.338) 0:10:10.007 *********** 2025-06-02 17:43:59.301801 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301805 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.301809 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.301813 | orchestrator | 2025-06-02 17:43:59.301817 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:43:59.301821 | orchestrator | Monday 02 June 2025 17:42:51 +0000 (0:00:00.319) 0:10:10.327 *********** 2025-06-02 17:43:59.301825 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301829 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.301834 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.301838 | orchestrator | 2025-06-02 17:43:59.301842 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:43:59.301846 | orchestrator | Monday 02 June 2025 17:42:51 +0000 (0:00:00.302) 0:10:10.630 *********** 2025-06-02 17:43:59.301850 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301854 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301861 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301865 | orchestrator | 2025-06-02 17:43:59.301869 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:43:59.301873 | orchestrator | Monday 02 June 2025 17:42:52 +0000 (0:00:01.029) 0:10:11.659 *********** 2025-06-02 17:43:59.301877 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301881 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301885 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301889 | orchestrator | 2025-06-02 17:43:59.301894 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:43:59.301898 | orchestrator | Monday 02 June 2025 17:42:53 +0000 (0:00:00.719) 0:10:12.379 *********** 2025-06-02 17:43:59.301902 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301906 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.301910 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.301914 | orchestrator | 2025-06-02 17:43:59.301918 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-06-02 17:43:59.301922 | orchestrator | Monday 02 June 2025 17:42:53 +0000 (0:00:00.317) 0:10:12.696 *********** 2025-06-02 17:43:59.301927 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.301931 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.301935 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.301939 | orchestrator | 2025-06-02 17:43:59.301943 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:43:59.301947 | orchestrator | Monday 02 June 2025 17:42:54 +0000 (0:00:00.304) 0:10:13.000 *********** 2025-06-02 17:43:59.301951 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301959 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301963 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301967 | orchestrator | 2025-06-02 17:43:59.301974 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:43:59.301978 | orchestrator | Monday 02 June 2025 17:42:54 +0000 (0:00:00.607) 0:10:13.608 *********** 2025-06-02 17:43:59.301982 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.301986 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.301990 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.301994 | orchestrator | 2025-06-02 17:43:59.301998 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:43:59.302003 | orchestrator | Monday 02 June 2025 17:42:55 +0000 (0:00:00.322) 0:10:13.930 *********** 2025-06-02 17:43:59.302007 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.302073 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.302081 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.302085 | orchestrator | 2025-06-02 17:43:59.302089 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-06-02 17:43:59.302094 | orchestrator | Monday 02 June 2025 17:42:55 +0000 (0:00:00.366) 0:10:14.296 *********** 2025-06-02 17:43:59.302098 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302102 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302106 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302110 | orchestrator | 2025-06-02 17:43:59.302114 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:43:59.302119 | orchestrator | Monday 02 June 2025 17:42:55 +0000 (0:00:00.293) 0:10:14.589 *********** 2025-06-02 17:43:59.302123 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302127 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302131 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302135 | orchestrator | 2025-06-02 17:43:59.302139 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:43:59.302143 | orchestrator | Monday 02 June 2025 17:42:56 +0000 (0:00:00.605) 0:10:15.195 *********** 2025-06-02 17:43:59.302147 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302151 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302155 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302159 | orchestrator | 2025-06-02 17:43:59.302163 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:43:59.302168 | orchestrator | Monday 02 June 2025 17:42:56 +0000 (0:00:00.313) 0:10:15.509 *********** 2025-06-02 17:43:59.302172 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.302176 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.302180 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.302184 | orchestrator | 2025-06-02 17:43:59.302188 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-06-02 17:43:59.302192 | orchestrator | Monday 02 June 2025 17:42:57 +0000 (0:00:00.332) 0:10:15.841 *********** 2025-06-02 17:43:59.302196 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.302200 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.302204 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.302208 | orchestrator | 2025-06-02 17:43:59.302213 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-02 17:43:59.302217 | orchestrator | Monday 02 June 2025 17:42:57 +0000 (0:00:00.796) 0:10:16.638 *********** 2025-06-02 17:43:59.302221 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.302225 | orchestrator | 2025-06-02 17:43:59.302229 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 17:43:59.302233 | orchestrator | Monday 02 June 2025 17:42:58 +0000 (0:00:00.554) 0:10:17.193 *********** 2025-06-02 17:43:59.302237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.302241 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 17:43:59.302246 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:43:59.302255 | orchestrator | 2025-06-02 17:43:59.302259 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 17:43:59.302263 | orchestrator | Monday 02 June 2025 17:43:00 +0000 (0:00:02.064) 0:10:19.258 *********** 2025-06-02 17:43:59.302269 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:43:59.302276 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:43:59.302286 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 17:43:59.302294 | orchestrator | skipping: [testbed-node-3] => (item=None)  
2025-06-02 17:43:59.302300 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.302306 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.302319 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:43:59.302326 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 17:43:59.302333 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.302340 | orchestrator | 2025-06-02 17:43:59.302347 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-02 17:43:59.302354 | orchestrator | Monday 02 June 2025 17:43:02 +0000 (0:00:01.460) 0:10:20.718 *********** 2025-06-02 17:43:59.302360 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302364 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302368 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302372 | orchestrator | 2025-06-02 17:43:59.302376 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-02 17:43:59.302380 | orchestrator | Monday 02 June 2025 17:43:02 +0000 (0:00:00.310) 0:10:21.029 *********** 2025-06-02 17:43:59.302385 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.302389 | orchestrator | 2025-06-02 17:43:59.302393 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-02 17:43:59.302397 | orchestrator | Monday 02 June 2025 17:43:02 +0000 (0:00:00.540) 0:10:21.569 *********** 2025-06-02 17:43:59.302401 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.302410 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.302414 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.302418 | orchestrator | 2025-06-02 17:43:59.302423 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-02 17:43:59.302427 | orchestrator | Monday 02 June 2025 17:43:04 +0000 (0:00:01.146) 0:10:22.716 *********** 2025-06-02 17:43:59.302431 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.302435 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 17:43:59.302439 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.302443 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 17:43:59.302447 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.302452 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 17:43:59.302456 | orchestrator | 2025-06-02 17:43:59.302460 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 17:43:59.302464 | orchestrator | Monday 02 June 2025 17:43:08 +0000 (0:00:04.567) 0:10:27.283 *********** 2025-06-02 17:43:59.302468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.302479 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:43:59.302483 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-06-02 17:43:59.302487 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:43:59.302491 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:43:59.302495 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:43:59.302499 | orchestrator | 2025-06-02 17:43:59.302503 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 17:43:59.302507 | orchestrator | Monday 02 June 2025 17:43:10 +0000 (0:00:02.245) 0:10:29.529 *********** 2025-06-02 17:43:59.302511 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:43:59.302516 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.302520 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:43:59.302524 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.302528 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:43:59.302532 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.302536 | orchestrator | 2025-06-02 17:43:59.302540 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-02 17:43:59.302544 | orchestrator | Monday 02 June 2025 17:43:12 +0000 (0:00:01.232) 0:10:30.762 *********** 2025-06-02 17:43:59.302548 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-02 17:43:59.302552 | orchestrator | 2025-06-02 17:43:59.302557 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-02 17:43:59.302561 | orchestrator | Monday 02 June 2025 17:43:12 +0000 (0:00:00.240) 0:10:31.002 *********** 2025-06-02 17:43:59.302565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302570 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302588 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302592 | orchestrator | 2025-06-02 17:43:59.302596 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-02 17:43:59.302600 | orchestrator | Monday 02 June 2025 17:43:13 +0000 (0:00:00.915) 0:10:31.917 *********** 2025-06-02 17:43:59.302603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:43:59.302622 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302626 | orchestrator | 2025-06-02 17:43:59.302632 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-02 17:43:59.302636 | orchestrator | Monday 02 June 2025 17:43:14 +0000 (0:00:01.157) 0:10:33.075 *********** 2025-06-02 17:43:59.302644 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:43:59.302648 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:43:59.302651 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:43:59.302655 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:43:59.302659 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:43:59.302663 | orchestrator | 2025-06-02 17:43:59.302667 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-02 17:43:59.302670 | orchestrator | Monday 02 June 2025 17:43:45 +0000 (0:00:30.977) 0:11:04.053 *********** 2025-06-02 17:43:59.302674 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302678 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302682 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302686 | orchestrator | 2025-06-02 17:43:59.302689 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-02 17:43:59.302693 | orchestrator | Monday 02 June 2025 17:43:45 +0000 (0:00:00.352) 0:11:04.406 
*********** 2025-06-02 17:43:59.302697 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302701 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302704 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302708 | orchestrator | 2025-06-02 17:43:59.302712 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-02 17:43:59.302716 | orchestrator | Monday 02 June 2025 17:43:46 +0000 (0:00:00.331) 0:11:04.737 *********** 2025-06-02 17:43:59.302719 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.302723 | orchestrator | 2025-06-02 17:43:59.302727 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-02 17:43:59.302731 | orchestrator | Monday 02 June 2025 17:43:46 +0000 (0:00:00.825) 0:11:05.562 *********** 2025-06-02 17:43:59.302734 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.302738 | orchestrator | 2025-06-02 17:43:59.302742 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-02 17:43:59.302746 | orchestrator | Monday 02 June 2025 17:43:47 +0000 (0:00:00.574) 0:11:06.136 *********** 2025-06-02 17:43:59.302749 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.302753 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.302757 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.302761 | orchestrator | 2025-06-02 17:43:59.302764 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-02 17:43:59.302768 | orchestrator | Monday 02 June 2025 17:43:48 +0000 (0:00:01.306) 0:11:07.443 *********** 2025-06-02 17:43:59.302772 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.302776 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 17:43:59.302779 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.302783 | orchestrator | 2025-06-02 17:43:59.302787 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-02 17:43:59.302791 | orchestrator | Monday 02 June 2025 17:43:50 +0000 (0:00:01.446) 0:11:08.889 *********** 2025-06-02 17:43:59.302794 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:43:59.302798 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:43:59.302802 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:43:59.302809 | orchestrator | 2025-06-02 17:43:59.302840 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-02 17:43:59.302844 | orchestrator | Monday 02 June 2025 17:43:51 +0000 (0:00:01.807) 0:11:10.697 *********** 2025-06-02 17:43:59.302848 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.302852 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.302856 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 17:43:59.302860 | orchestrator | 2025-06-02 17:43:59.302864 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:43:59.302867 | orchestrator | Monday 02 June 2025 17:43:54 +0000 (0:00:02.597) 0:11:13.295 *********** 2025-06-02 17:43:59.302871 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302875 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302879 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302882 | orchestrator | 2025-06-02 17:43:59.302886 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-06-02 17:43:59.302890 | orchestrator | Monday 02 June 2025 17:43:54 +0000 (0:00:00.350) 0:11:13.645 *********** 2025-06-02 17:43:59.302894 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:43:59.302898 | orchestrator | 2025-06-02 17:43:59.302904 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 17:43:59.302908 | orchestrator | Monday 02 June 2025 17:43:55 +0000 (0:00:00.532) 0:11:14.178 *********** 2025-06-02 17:43:59.302912 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.302915 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.302919 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.302923 | orchestrator | 2025-06-02 17:43:59.302927 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 17:43:59.302931 | orchestrator | Monday 02 June 2025 17:43:56 +0000 (0:00:00.560) 0:11:14.739 *********** 2025-06-02 17:43:59.302934 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302938 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:43:59.302942 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:43:59.302946 | orchestrator | 2025-06-02 17:43:59.302949 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 17:43:59.302953 | orchestrator | Monday 02 June 2025 17:43:56 +0000 (0:00:00.360) 0:11:15.100 *********** 2025-06-02 17:43:59.302957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:43:59.302961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:43:59.302964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:43:59.302968 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:43:59.302972 | 
orchestrator | 2025-06-02 17:43:59.302976 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 17:43:59.302980 | orchestrator | Monday 02 June 2025 17:43:56 +0000 (0:00:00.597) 0:11:15.698 *********** 2025-06-02 17:43:59.302983 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:43:59.302987 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:43:59.302991 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:43:59.302995 | orchestrator | 2025-06-02 17:43:59.302999 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:43:59.303002 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-02 17:43:59.303007 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-02 17:43:59.303011 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-02 17:43:59.303029 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-02 17:43:59.303034 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-02 17:43:59.303038 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-02 17:43:59.303041 | orchestrator | 2025-06-02 17:43:59.303045 | orchestrator | 2025-06-02 17:43:59.303049 | orchestrator | 2025-06-02 17:43:59.303053 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:43:59.303056 | orchestrator | Monday 02 June 2025 17:43:57 +0000 (0:00:00.249) 0:11:15.947 *********** 2025-06-02 17:43:59.303060 | orchestrator | =============================================================================== 2025-06-02 17:43:59.303064 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 56.60s 2025-06-02 17:43:59.303067 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.05s 2025-06-02 17:43:59.303071 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.98s 2025-06-02 17:43:59.303075 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.35s 2025-06-02 17:43:59.303079 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.98s 2025-06-02 17:43:59.303085 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.26s 2025-06-02 17:43:59.303089 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.04s 2025-06-02 17:43:59.303092 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.53s 2025-06-02 17:43:59.303096 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.52s 2025-06-02 17:43:59.303100 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.43s 2025-06-02 17:43:59.303104 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.89s 2025-06-02 17:43:59.303107 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.55s 2025-06-02 17:43:59.303111 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.77s 2025-06-02 17:43:59.303115 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.57s 2025-06-02 17:43:59.303118 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.22s 2025-06-02 17:43:59.303122 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.09s 2025-06-02 17:43:59.303126 | orchestrator 
| ceph-mds : Create ceph filesystem --------------------------------------- 3.72s
2025-06-02 17:43:59.303130 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.53s
2025-06-02 17:43:59.303133 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.48s
2025-06-02 17:43:59.303137 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.31s
2025-06-02 17:43:59.303144 | orchestrator | 2025-06-02 17:43:59 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:43:59.303148 | orchestrator | 2025-06-02 17:43:59 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:43:59.303152 | orchestrator | 2025-06-02 17:43:59 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:43:59.303156 | orchestrator | 2025-06-02 17:43:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:02.353553 | orchestrator | 2025-06-02 17:44:02 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:02.354353 | orchestrator | 2025-06-02 17:44:02 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:02.356465 | orchestrator | 2025-06-02 17:44:02 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:02.356537 | orchestrator | 2025-06-02 17:44:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:05.423415 | orchestrator | 2025-06-02 17:44:05 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:05.424909 | orchestrator | 2025-06-02 17:44:05 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:05.427069 | orchestrator | 2025-06-02 17:44:05 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:05.427311 | orchestrator | 2025-06-02 17:44:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:08.481057 | orchestrator | 2025-06-02 17:44:08 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:08.482468 | orchestrator | 2025-06-02 17:44:08 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:08.485106 | orchestrator | 2025-06-02 17:44:08 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:08.485152 | orchestrator | 2025-06-02 17:44:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:11.536343 | orchestrator | 2025-06-02 17:44:11 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:11.536816 | orchestrator | 2025-06-02 17:44:11 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:11.541056 | orchestrator | 2025-06-02 17:44:11 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:11.541276 | orchestrator | 2025-06-02 17:44:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:14.585925 | orchestrator | 2025-06-02 17:44:14 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:14.588116 | orchestrator | 2025-06-02 17:44:14 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:14.590653 | orchestrator | 2025-06-02 17:44:14 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:14.591375 | orchestrator | 2025-06-02 17:44:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:17.642148 | orchestrator | 2025-06-02 17:44:17 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:17.643311 | orchestrator | 2025-06-02 17:44:17 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:17.644711 | orchestrator | 2025-06-02 17:44:17 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:17.645047 | orchestrator | 2025-06-02 17:44:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:20.691354 | orchestrator | 2025-06-02 17:44:20 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:20.692861 | orchestrator | 2025-06-02 17:44:20 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:20.694489 | orchestrator | 2025-06-02 17:44:20 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:20.694526 | orchestrator | 2025-06-02 17:44:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:23.736628 | orchestrator | 2025-06-02 17:44:23 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:23.740093 | orchestrator | 2025-06-02 17:44:23 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:23.742235 | orchestrator | 2025-06-02 17:44:23 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:23.742299 | orchestrator | 2025-06-02 17:44:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:26.791436 | orchestrator | 2025-06-02 17:44:26 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:26.792873 | orchestrator | 2025-06-02 17:44:26 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:26.794370 | orchestrator | 2025-06-02 17:44:26 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:26.794426 | orchestrator | 2025-06-02 17:44:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:29.844168 | orchestrator | 2025-06-02 17:44:29 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:29.846249 | orchestrator | 2025-06-02 17:44:29 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:29.848412 | orchestrator | 2025-06-02 17:44:29 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:29.848482 | orchestrator | 2025-06-02 17:44:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:32.896493 | orchestrator | 2025-06-02 17:44:32 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:32.899678 | orchestrator | 2025-06-02 17:44:32 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:32.902364 | orchestrator | 2025-06-02 17:44:32 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:32.902451 | orchestrator | 2025-06-02 17:44:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:35.951498 | orchestrator | 2025-06-02 17:44:35 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:35.953336 | orchestrator | 2025-06-02 17:44:35 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:35.954818 | orchestrator | 2025-06-02 17:44:35 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:35.954858 | orchestrator | 2025-06-02 17:44:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:39.014223 | orchestrator | 2025-06-02 17:44:39 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:39.016631 | orchestrator | 2025-06-02 17:44:39 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:39.019231 | orchestrator | 2025-06-02 17:44:39 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:39.019429 | orchestrator | 2025-06-02 17:44:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:42.066539 | orchestrator | 2025-06-02 17:44:42 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:42.069322 | orchestrator | 2025-06-02 17:44:42 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:42.072184 | orchestrator | 2025-06-02 17:44:42 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:42.072255 | orchestrator | 2025-06-02 17:44:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:45.124385 | orchestrator | 2025-06-02 17:44:45 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:45.126230 | orchestrator | 2025-06-02 17:44:45 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:45.128878 | orchestrator | 2025-06-02 17:44:45 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:45.129043 | orchestrator | 2025-06-02 17:44:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:48.175337 | orchestrator | 2025-06-02 17:44:48 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:48.178698 | orchestrator | 2025-06-02 17:44:48 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:48.180100 | orchestrator | 2025-06-02 17:44:48 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:48.180229 | orchestrator | 2025-06-02 17:44:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:51.227503 | orchestrator | 2025-06-02 17:44:51 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:51.228511 | orchestrator | 2025-06-02 17:44:51 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:51.230426 | orchestrator | 2025-06-02 17:44:51 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:51.230463 | orchestrator | 2025-06-02 17:44:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:54.274685 | orchestrator | 2025-06-02 17:44:54 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:54.278255 | orchestrator
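The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" messages above come from a client polling asynchronous task state until every task leaves the running state. A minimal sketch of such a poll loop (hypothetical helper names and a pluggable `get_state` callback, not the actual osism client code):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task ID via get_state() until none is PENDING/STARTED.

    get_state is any callable mapping a task ID to its current state
    string (an assumption for this sketch; the real client queries a
    task API). Raises TimeoutError if tasks are still running at the
    deadline; returns True once all tasks have finished.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {pending}")
        for tid in list(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                # Task reached a terminal state (e.g. SUCCESS/FAILURE).
                pending.remove(tid)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True
```

In the log the same three task UUIDs are re-checked roughly every three seconds, which matches this shape: one status line per task per cycle, then a wait message before the next round.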
| 2025-06-02 17:44:54 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:54.281243 | orchestrator | 2025-06-02 17:44:54 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:54.281323 | orchestrator | 2025-06-02 17:44:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:44:57.328497 | orchestrator | 2025-06-02 17:44:57 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state STARTED
2025-06-02 17:44:57.331434 | orchestrator | 2025-06-02 17:44:57 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED
2025-06-02 17:44:57.333106 | orchestrator | 2025-06-02 17:44:57 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED
2025-06-02 17:44:57.333173 | orchestrator | 2025-06-02 17:44:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:45:00.383826 | orchestrator |
2025-06-02 17:45:00.383965 | orchestrator |
2025-06-02 17:45:00.383976 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:45:00.383984 | orchestrator |
2025-06-02 17:45:00.383990 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:45:00.384067 | orchestrator | Monday 02 June 2025 17:41:57 +0000 (0:00:00.259) 0:00:00.259 ***********
2025-06-02 17:45:00.384077 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:00.384083 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:00.384087 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:00.384091 | orchestrator |
2025-06-02 17:45:00.384095 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:45:00.384099 | orchestrator | Monday 02 June 2025 17:41:57 +0000 (0:00:00.296) 0:00:00.556 ***********
2025-06-02 17:45:00.384103 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-06-02 17:45:00.384108 | orchestrator | ok:
[testbed-node-1] => (item=enable_opensearch_True)
2025-06-02 17:45:00.384112 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-06-02 17:45:00.384116 | orchestrator |
2025-06-02 17:45:00.384120 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-06-02 17:45:00.384124 | orchestrator |
2025-06-02 17:45:00.384128 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-02 17:45:00.384132 | orchestrator | Monday 02 June 2025 17:41:57 +0000 (0:00:00.427) 0:00:00.983 ***********
2025-06-02 17:45:00.384137 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:45:00.384159 | orchestrator |
2025-06-02 17:45:00.384163 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-06-02 17:45:00.384167 | orchestrator | Monday 02 June 2025 17:41:58 +0000 (0:00:00.518) 0:00:01.502 ***********
2025-06-02 17:45:00.384171 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 17:45:00.384175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 17:45:00.384179 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-02 17:45:00.384182 | orchestrator |
2025-06-02 17:45:00.384186 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-06-02 17:45:00.384190 | orchestrator | Monday 02 June 2025 17:41:59 +0000 (0:00:00.718) 0:00:02.220 ***********
2025-06-02 17:45:00.384208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS':
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384260 | orchestrator | 2025-06-02 17:45:00.384264 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 17:45:00.384267 | orchestrator | Monday 02 June 2025 17:42:00 +0000 (0:00:01.712) 0:00:03.933 *********** 2025-06-02 
17:45:00.384271 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:00.384275 | orchestrator | 2025-06-02 17:45:00.384279 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-02 17:45:00.384283 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:00.531) 0:00:04.465 *********** 2025-06-02 17:45:00.384292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384335 | orchestrator | 2025-06-02 17:45:00.384339 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-02 17:45:00.384343 | orchestrator | Monday 02 June 2025 17:42:04 +0000 (0:00:02.785) 0:00:07.250 *********** 2025-06-02 17:45:00.384349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:45:00.384353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:45:00.384360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:45:00.384370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:45:00.384374 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:00.384378 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:00.384384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:45:00.384388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:45:00.384393 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:00.384396 | orchestrator | 2025-06-02 17:45:00.384400 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-02 17:45:00.384404 | orchestrator | Monday 02 June 2025 17:42:05 +0000 (0:00:01.489) 0:00:08.740 *********** 2025-06-02 17:45:00.384411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:45:00.384419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:45:00.384423 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:00.384429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:45:00.384434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:45:00.384438 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:00.384444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:45:00.384453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:45:00.384458 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:00.384461 | orchestrator | 2025-06-02 17:45:00.384465 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-02 17:45:00.384469 | orchestrator | Monday 02 June 2025 17:42:06 +0000 (0:00:01.062) 0:00:09.802 *********** 2025-06-02 17:45:00.384475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384512 | orchestrator | 2025-06-02 17:45:00.384516 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-02 17:45:00.384520 | orchestrator | Monday 02 June 2025 17:42:09 +0000 (0:00:02.450) 
0:00:12.253 *********** 2025-06-02 17:45:00.384523 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:00.384530 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:00.384534 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:00.384538 | orchestrator | 2025-06-02 17:45:00.384542 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-02 17:45:00.384545 | orchestrator | Monday 02 June 2025 17:42:12 +0000 (0:00:03.118) 0:00:15.372 *********** 2025-06-02 17:45:00.384549 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:00.384553 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:00.384556 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:00.384560 | orchestrator | 2025-06-02 17:45:00.384564 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-02 17:45:00.384568 | orchestrator | Monday 02 June 2025 17:42:13 +0000 (0:00:01.553) 0:00:16.925 *********** 2025-06-02 17:45:00.384575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 2025-06-02 17:45:00 | INFO  | Task 90e03e9f-e6ce-4d32-b400-95438ff27ed8 is in state SUCCESS 2025-06-02 17:45:00.384581 | orchestrator | '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:45:00.384597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': 
True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-06-02 17:45:00.384613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:45:00.384617 | orchestrator | 2025-06-02 17:45:00.384621 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 17:45:00.384625 | orchestrator | Monday 02 June 2025 17:42:16 +0000 (0:00:02.324) 0:00:19.249 *********** 2025-06-02 17:45:00.384628 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:00.384632 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:00.384636 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:00.384639 | orchestrator | 2025-06-02 17:45:00.384643 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 17:45:00.384647 | orchestrator | Monday 02 June 2025 17:42:16 +0000 (0:00:00.291) 0:00:19.541 *********** 2025-06-02 17:45:00.384651 | orchestrator | 2025-06-02 17:45:00.384654 | orchestrator | 
TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 17:45:00.384658 | orchestrator | Monday 02 June 2025 17:42:16 +0000 (0:00:00.090) 0:00:19.631 *********** 2025-06-02 17:45:00.384662 | orchestrator | 2025-06-02 17:45:00.384668 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 17:45:00.384672 | orchestrator | Monday 02 June 2025 17:42:16 +0000 (0:00:00.069) 0:00:19.701 *********** 2025-06-02 17:45:00.384676 | orchestrator | 2025-06-02 17:45:00.384679 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-02 17:45:00.384686 | orchestrator | Monday 02 June 2025 17:42:16 +0000 (0:00:00.259) 0:00:19.960 *********** 2025-06-02 17:45:00.384690 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:00.384694 | orchestrator | 2025-06-02 17:45:00.384697 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-02 17:45:00.384701 | orchestrator | Monday 02 June 2025 17:42:17 +0000 (0:00:00.197) 0:00:20.158 *********** 2025-06-02 17:45:00.384705 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:00.384709 | orchestrator | 2025-06-02 17:45:00.384713 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-02 17:45:00.384718 | orchestrator | Monday 02 June 2025 17:42:17 +0000 (0:00:00.210) 0:00:20.368 *********** 2025-06-02 17:45:00.384722 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:00.384727 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:00.384731 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:00.384736 | orchestrator | 2025-06-02 17:45:00.384740 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-02 17:45:00.384745 | orchestrator | Monday 02 June 2025 17:43:27 +0000 (0:01:10.354) 0:01:30.723 *********** 2025-06-02 
17:45:00.384761 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:00.384765 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:00.384776 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:00.384781 | orchestrator | 2025-06-02 17:45:00.384785 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 17:45:00.384790 | orchestrator | Monday 02 June 2025 17:44:46 +0000 (0:01:19.108) 0:02:49.832 *********** 2025-06-02 17:45:00.384794 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:00.384799 | orchestrator | 2025-06-02 17:45:00.384803 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-02 17:45:00.384808 | orchestrator | Monday 02 June 2025 17:44:47 +0000 (0:00:00.708) 0:02:50.540 *********** 2025-06-02 17:45:00.384812 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:00.384817 | orchestrator | 2025-06-02 17:45:00.384820 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-02 17:45:00.384824 | orchestrator | Monday 02 June 2025 17:44:49 +0000 (0:00:02.428) 0:02:52.969 *********** 2025-06-02 17:45:00.384828 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:00.384832 | orchestrator | 2025-06-02 17:45:00.384835 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-02 17:45:00.384839 | orchestrator | Monday 02 June 2025 17:44:52 +0000 (0:00:02.398) 0:02:55.368 *********** 2025-06-02 17:45:00.384843 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:00.384847 | orchestrator | 2025-06-02 17:45:00.384850 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-02 17:45:00.384856 | orchestrator | Monday 02 June 2025 17:44:55 +0000 (0:00:02.679) 0:02:58.048 *********** 2025-06-02 
17:45:00.384860 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:00.384864 | orchestrator | 2025-06-02 17:45:00.384868 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:45:00.384872 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:45:00.384876 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:45:00.384880 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:45:00.384884 | orchestrator | 2025-06-02 17:45:00.384888 | orchestrator | 2025-06-02 17:45:00.384891 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:45:00.384895 | orchestrator | Monday 02 June 2025 17:44:57 +0000 (0:00:02.751) 0:03:00.799 *********** 2025-06-02 17:45:00.384902 | orchestrator | =============================================================================== 2025-06-02 17:45:00.384906 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.11s 2025-06-02 17:45:00.384910 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.35s 2025-06-02 17:45:00.384914 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.12s 2025-06-02 17:45:00.385015 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.79s 2025-06-02 17:45:00.385026 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.75s 2025-06-02 17:45:00.385032 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.68s 2025-06-02 17:45:00.385038 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.45s 2025-06-02 17:45:00.385044 | orchestrator | 
opensearch : Wait for OpenSearch to become ready ------------------------ 2.43s 2025-06-02 17:45:00.385051 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.40s 2025-06-02 17:45:00.385056 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.32s 2025-06-02 17:45:00.385060 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2025-06-02 17:45:00.385064 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.55s 2025-06-02 17:45:00.385067 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.49s 2025-06-02 17:45:00.385075 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s 2025-06-02 17:45:00.385079 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.72s 2025-06-02 17:45:00.385083 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2025-06-02 17:45:00.385087 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-06-02 17:45:00.385090 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-06-02 17:45:00.385094 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-06-02 17:45:00.385098 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.42s 2025-06-02 17:45:00.385104 | orchestrator | 2025-06-02 17:45:00 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:00.386524 | orchestrator | 2025-06-02 17:45:00 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED 2025-06-02 17:45:00.386556 | orchestrator | 2025-06-02 17:45:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:03.432130 | 
orchestrator | 2025-06-02 17:45:03 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:03.434388 | orchestrator | 2025-06-02 17:45:03 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED 2025-06-02 17:45:03.434446 | orchestrator | 2025-06-02 17:45:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:06.483709 | orchestrator | 2025-06-02 17:45:06 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:06.486138 | orchestrator | 2025-06-02 17:45:06 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED 2025-06-02 17:45:06.486189 | orchestrator | 2025-06-02 17:45:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:09.529604 | orchestrator | 2025-06-02 17:45:09 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:09.531260 | orchestrator | 2025-06-02 17:45:09 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state STARTED 2025-06-02 17:45:09.531631 | orchestrator | 2025-06-02 17:45:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:12.583855 | orchestrator | 2025-06-02 17:45:12 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:12.589998 | orchestrator | 2025-06-02 17:45:12 | INFO  | Task 1c00a34f-00e0-44b7-be3d-b03d66f5aa60 is in state SUCCESS 2025-06-02 17:45:12.590108 | orchestrator | 2025-06-02 17:45:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:12.591515 | orchestrator | 2025-06-02 17:45:12.591556 | orchestrator | 2025-06-02 17:45:12.591567 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-02 17:45:12.591573 | orchestrator | 2025-06-02 17:45:12.591580 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 17:45:12.591588 | orchestrator | Monday 02 June 2025 17:41:56 +0000 (0:00:00.103) 0:00:00.103 
*********** 2025-06-02 17:45:12.591595 | orchestrator | ok: [localhost] => { 2025-06-02 17:45:12.591603 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-02 17:45:12.591610 | orchestrator | } 2025-06-02 17:45:12.591617 | orchestrator | 2025-06-02 17:45:12.591623 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-02 17:45:12.591627 | orchestrator | Monday 02 June 2025 17:41:57 +0000 (0:00:00.046) 0:00:00.150 *********** 2025-06-02 17:45:12.591631 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-02 17:45:12.591637 | orchestrator | ...ignoring 2025-06-02 17:45:12.591642 | orchestrator | 2025-06-02 17:45:12.591646 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-02 17:45:12.591650 | orchestrator | Monday 02 June 2025 17:41:59 +0000 (0:00:02.844) 0:00:02.995 *********** 2025-06-02 17:45:12.591654 | orchestrator | skipping: [localhost] 2025-06-02 17:45:12.591658 | orchestrator | 2025-06-02 17:45:12.591661 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-02 17:45:12.591665 | orchestrator | Monday 02 June 2025 17:41:59 +0000 (0:00:00.052) 0:00:03.047 *********** 2025-06-02 17:45:12.591669 | orchestrator | ok: [localhost] 2025-06-02 17:45:12.591673 | orchestrator | 2025-06-02 17:45:12.591677 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:45:12.591680 | orchestrator | 2025-06-02 17:45:12.591684 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:45:12.591688 | orchestrator | Monday 02 June 2025 17:42:00 +0000 (0:00:00.165) 0:00:03.213 *********** 2025-06-02 17:45:12.591691 | 
orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.591695 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:12.591699 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:12.591703 | orchestrator | 2025-06-02 17:45:12.591706 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:45:12.591710 | orchestrator | Monday 02 June 2025 17:42:00 +0000 (0:00:00.332) 0:00:03.545 *********** 2025-06-02 17:45:12.591714 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 17:45:12.591718 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 17:45:12.591722 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 17:45:12.591725 | orchestrator | 2025-06-02 17:45:12.591729 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 17:45:12.591733 | orchestrator | 2025-06-02 17:45:12.591750 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 17:45:12.591754 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:00.690) 0:00:04.236 *********** 2025-06-02 17:45:12.591757 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 17:45:12.591770 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 17:45:12.591779 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 17:45:12.591785 | orchestrator | 2025-06-02 17:45:12.591791 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 17:45:12.591797 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:00.428) 0:00:04.665 *********** 2025-06-02 17:45:12.591820 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:12.591828 | orchestrator | 2025-06-02 17:45:12.591833 | orchestrator | TASK [mariadb : 
Ensuring config directories exist] *****************************
Monday 02 June 2025 17:42:02 +0000 (0:00:00.619) 0:00:05.284 ***********
changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})

TASK [mariadb : Ensuring database backup config directory exists] **************
Monday 02 June 2025 17:42:05 +0000 (0:00:03.295) 0:00:08.579 ***********
changed: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
Monday 02 June 2025 17:42:06 +0000 (0:00:00.949) 0:00:09.529 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [mariadb : Copying over config.json files for services] *******************
Monday 02 June 2025 17:42:08 +0000 (0:00:01.715) 0:00:11.245 ***********
changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})

TASK [mariadb : Copying over config.json files for mariabackup] ****************
Monday 02 June 2025 17:42:12 +0000 (0:00:03.963) 0:00:15.208 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [mariadb : Copying over galera.cnf] ***************************************
Monday 02 June 2025 17:42:13 +0000 (0:00:01.106) 0:00:16.315 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [mariadb : include_tasks] *************************************************
Monday 02 June 2025 17:42:17 +0000 (0:00:03.899) 0:00:20.215 ***********
included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
Monday 02 June 2025 17:42:17 +0000 (0:00:00.511) 0:00:20.726 ***********
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
Monday 02 June 2025 17:42:21 +0000 (0:00:03.682) 0:00:24.409 ***********
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
Monday 02 June 2025 17:42:24 +0000 (0:00:02.704) 0:00:27.114 ***********
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
skipping: [testbed-node-2]

TASK [mariadb : Check mariadb containers] **************************************
Monday 02 June 2025 17:42:27 +0000 (0:00:03.025) 0:00:30.139 ***********
changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})

TASK [mariadb : Create MariaDB volume] *****************************************
Monday 02 June 2025 17:42:30 +0000 (0:00:03.657) 0:00:33.796 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
Monday 02 June 2025 17:42:32 +0000 (0:00:01.472) 0:00:35.269 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [mariadb : Establish whether the cluster has already existed] *************
Monday 02 June 2025 17:42:32 +0000 (0:00:00.384) 0:00:35.654 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [mariadb : Check MariaDB service port liveness] ***************************
Monday 02 June 2025 17:42:32 +0000 (0:00:00.351) 0:00:36.006 ***********
fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
...ignoring
fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
...ignoring
fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
...ignoring

TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
Monday 02 June 2025 17:42:43 +0000 (0:00:10.982) 0:00:46.989 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [mariadb : Fail on existing but stopped cluster] **************************
Monday 02 June 2025 17:42:44 +0000 (0:00:00.716) 0:00:47.705 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [mariadb : Check MariaDB
service WSREP sync status] *********************** 2025-06-02 17:45:12.592688 | orchestrator | Monday 02 June 2025 17:42:45 +0000 (0:00:00.431) 0:00:48.136 *********** 2025-06-02 17:45:12.592692 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:12.592696 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.592701 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.592705 | orchestrator | 2025-06-02 17:45:12.592709 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-02 17:45:12.592714 | orchestrator | Monday 02 June 2025 17:42:45 +0000 (0:00:00.403) 0:00:48.540 *********** 2025-06-02 17:45:12.592718 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:12.592722 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.592727 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.592731 | orchestrator | 2025-06-02 17:45:12.592735 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-02 17:45:12.592742 | orchestrator | Monday 02 June 2025 17:42:45 +0000 (0:00:00.520) 0:00:49.061 *********** 2025-06-02 17:45:12.592746 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.592751 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:12.592755 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:12.592759 | orchestrator | 2025-06-02 17:45:12.592764 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-02 17:45:12.592768 | orchestrator | Monday 02 June 2025 17:42:46 +0000 (0:00:00.666) 0:00:49.727 *********** 2025-06-02 17:45:12.592772 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:12.592777 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.592781 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.592785 | orchestrator | 2025-06-02 17:45:12.592790 | orchestrator | TASK [mariadb : include_tasks] 
************************************************* 2025-06-02 17:45:12.592794 | orchestrator | Monday 02 June 2025 17:42:47 +0000 (0:00:00.404) 0:00:50.132 *********** 2025-06-02 17:45:12.592798 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.592803 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.592807 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-02 17:45:12.592812 | orchestrator | 2025-06-02 17:45:12.592816 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-02 17:45:12.592820 | orchestrator | Monday 02 June 2025 17:42:47 +0000 (0:00:00.370) 0:00:50.502 *********** 2025-06-02 17:45:12.592825 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:12.592829 | orchestrator | 2025-06-02 17:45:12.592833 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-02 17:45:12.592837 | orchestrator | Monday 02 June 2025 17:42:57 +0000 (0:00:10.250) 0:01:00.753 *********** 2025-06-02 17:45:12.592842 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.592846 | orchestrator | 2025-06-02 17:45:12.592850 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 17:45:12.592855 | orchestrator | Monday 02 June 2025 17:42:57 +0000 (0:00:00.121) 0:01:00.875 *********** 2025-06-02 17:45:12.592859 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:12.592863 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.592868 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.592872 | orchestrator | 2025-06-02 17:45:12.592876 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-02 17:45:12.592880 | orchestrator | Monday 02 June 2025 17:42:58 +0000 (0:00:01.044) 0:01:01.920 *********** 2025-06-02 17:45:12.592885 | orchestrator | changed: [testbed-node-0] 
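The "Check MariaDB service port liveness" failures earlier in this log ("Timeout when waiting for search string MariaDB in 192.168.16.10:3306") come from Ansible's `wait_for` with `search_regex: MariaDB`: it opens a TCP connection to port 3306 and looks for the string "MariaDB" in the server's initial protocol greeting. On a fresh deploy nothing is listening yet, so the check times out and is deliberately ignored. A minimal sketch of that probe (a hypothetical helper, not kolla-ansible's actual module code):

```python
import socket

def mariadb_port_alive(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    """Connect to host:port and report whether the server's initial
    handshake contains 'MariaDB', mirroring Ansible's wait_for task with
    search_regex=MariaDB seen in the log above."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # MariaDB sends its version banner first, before the client speaks.
            greeting = sock.recv(1024)
            return b"MariaDB" in greeting
    except OSError:
        # Connection refused or timed out: the task reports FAILED (...ignoring).
        return False
```

On the first pass all three nodes fail this probe, which is what routes testbed-node-0 into `bootstrap_cluster.yml` while the other two nodes wait.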
2025-06-02 17:45:12.592889 | orchestrator | 2025-06-02 17:45:12.592894 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-02 17:45:12.592916 | orchestrator | Monday 02 June 2025 17:43:06 +0000 (0:00:08.001) 0:01:09.922 *********** 2025-06-02 17:45:12.592926 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.592936 | orchestrator | 2025-06-02 17:45:12.592943 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-02 17:45:12.592952 | orchestrator | Monday 02 June 2025 17:43:08 +0000 (0:00:01.623) 0:01:11.546 *********** 2025-06-02 17:45:12.592958 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.592963 | orchestrator | 2025-06-02 17:45:12.592969 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-02 17:45:12.592975 | orchestrator | Monday 02 June 2025 17:43:11 +0000 (0:00:02.686) 0:01:14.232 *********** 2025-06-02 17:45:12.592980 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:12.592986 | orchestrator | 2025-06-02 17:45:12.592992 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-02 17:45:12.592998 | orchestrator | Monday 02 June 2025 17:43:11 +0000 (0:00:00.179) 0:01:14.411 *********** 2025-06-02 17:45:12.593003 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:12.593009 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.593015 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.593021 | orchestrator | 2025-06-02 17:45:12.593026 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-02 17:45:12.593032 | orchestrator | Monday 02 June 2025 17:43:11 +0000 (0:00:00.541) 0:01:14.953 *********** 2025-06-02 17:45:12.593037 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:12.593043 | orchestrator | [WARNING]: Could not match supplied 
host pattern, ignoring: mariadb_restart 2025-06-02 17:45:12.593048 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:12.593054 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:12.593060 | orchestrator | 2025-06-02 17:45:12.593066 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 17:45:12.593072 | orchestrator | skipping: no hosts matched 2025-06-02 17:45:12.593078 | orchestrator | 2025-06-02 17:45:12.593084 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 17:45:12.593090 | orchestrator | 2025-06-02 17:45:12.593097 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 17:45:12.593104 | orchestrator | Monday 02 June 2025 17:43:12 +0000 (0:00:00.355) 0:01:15.309 *********** 2025-06-02 17:45:12.593110 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:12.593115 | orchestrator | 2025-06-02 17:45:12.593121 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 17:45:12.593127 | orchestrator | Monday 02 June 2025 17:43:32 +0000 (0:00:19.871) 0:01:35.180 *********** 2025-06-02 17:45:12.593134 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 
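The "Start mariadb services" and "Restart bootstrap mariadb service" plays that follow apply the same three-step sequence to one node at a time: restart the container, wait for port liveness (note the "FAILED - RETRYING ... (10 retries left)" line), then poll until the node reports WSREP state "Synced". Restarting serially keeps the Galera cluster quorate throughout. A minimal sketch of that rolling loop, with the restart and check steps stubbed out (`restart_container`, `port_alive`, and `wsrep_synced` are hypothetical callables, not kolla-ansible APIs):

```python
import time
from typing import Callable, Iterable

def rolling_restart(
    nodes: Iterable[str],
    restart_container: Callable[[str], None],
    port_alive: Callable[[str], bool],
    wsrep_synced: Callable[[str], bool],
    retries: int = 10,
    delay: float = 1.0,
) -> None:
    """Restart Galera members one at a time so the cluster never loses
    quorum: each node must pass the port check and then report WSREP
    'Synced' before the next node is touched."""
    for node in nodes:
        restart_container(node)
        for check in (port_alive, wsrep_synced):
            for _attempt in range(retries):
                if check(node):
                    break
                time.sleep(delay)  # mirrors the FAILED - RETRYING lines in the log
            else:
                raise RuntimeError(f"{node}: check {check.__name__} never passed")
```

In the log this ordering is visible directly: testbed-node-1 completes all three steps before testbed-node-2 starts, and the bootstrap host testbed-node-0 is restarted last.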
2025-06-02 17:45:12.593140 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:12.593146 | orchestrator | 2025-06-02 17:45:12.593153 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 17:45:12.593157 | orchestrator | Monday 02 June 2025 17:43:53 +0000 (0:00:21.111) 0:01:56.292 *********** 2025-06-02 17:45:12.593161 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:12.593165 | orchestrator | 2025-06-02 17:45:12.593168 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 17:45:12.593172 | orchestrator | 2025-06-02 17:45:12.593176 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 17:45:12.593180 | orchestrator | Monday 02 June 2025 17:43:55 +0000 (0:00:02.498) 0:01:58.791 *********** 2025-06-02 17:45:12.593183 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:12.593187 | orchestrator | 2025-06-02 17:45:12.593195 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 17:45:12.593199 | orchestrator | Monday 02 June 2025 17:44:21 +0000 (0:00:25.586) 0:02:24.378 *********** 2025-06-02 17:45:12.593203 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:12.593206 | orchestrator | 2025-06-02 17:45:12.593210 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 17:45:12.593219 | orchestrator | Monday 02 June 2025 17:44:36 +0000 (0:00:15.623) 0:02:40.001 *********** 2025-06-02 17:45:12.593223 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:12.593226 | orchestrator | 2025-06-02 17:45:12.593230 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 17:45:12.593234 | orchestrator | 2025-06-02 17:45:12.593238 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 
17:45:12.593241 | orchestrator | Monday 02 June 2025 17:44:39 +0000 (0:00:02.754) 0:02:42.756 *********** 2025-06-02 17:45:12.593245 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:12.593249 | orchestrator | 2025-06-02 17:45:12.593253 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 17:45:12.593257 | orchestrator | Monday 02 June 2025 17:44:51 +0000 (0:00:11.968) 0:02:54.725 *********** 2025-06-02 17:45:12.593260 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.593264 | orchestrator | 2025-06-02 17:45:12.593268 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 17:45:12.593271 | orchestrator | Monday 02 June 2025 17:44:57 +0000 (0:00:05.560) 0:03:00.286 *********** 2025-06-02 17:45:12.593275 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.593279 | orchestrator | 2025-06-02 17:45:12.593283 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 17:45:12.593287 | orchestrator | 2025-06-02 17:45:12.593290 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 17:45:12.593294 | orchestrator | Monday 02 June 2025 17:44:59 +0000 (0:00:02.448) 0:03:02.735 *********** 2025-06-02 17:45:12.593298 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:12.593301 | orchestrator | 2025-06-02 17:45:12.593305 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-02 17:45:12.593309 | orchestrator | Monday 02 June 2025 17:45:00 +0000 (0:00:00.516) 0:03:03.251 *********** 2025-06-02 17:45:12.593312 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.593316 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.593320 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:12.593324 | orchestrator | 2025-06-02 
17:45:12.593327 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-02 17:45:12.593331 | orchestrator | Monday 02 June 2025 17:45:02 +0000 (0:00:02.404) 0:03:05.655 *********** 2025-06-02 17:45:12.593335 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.593339 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.593342 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:12.593346 | orchestrator | 2025-06-02 17:45:12.593353 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-02 17:45:12.593357 | orchestrator | Monday 02 June 2025 17:45:04 +0000 (0:00:02.134) 0:03:07.790 *********** 2025-06-02 17:45:12.593360 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.593364 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.593368 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:12.593372 | orchestrator | 2025-06-02 17:45:12.593375 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-02 17:45:12.593379 | orchestrator | Monday 02 June 2025 17:45:06 +0000 (0:00:02.040) 0:03:09.831 *********** 2025-06-02 17:45:12.593383 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.593387 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.593390 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:12.593394 | orchestrator | 2025-06-02 17:45:12.593398 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-02 17:45:12.593401 | orchestrator | Monday 02 June 2025 17:45:08 +0000 (0:00:02.048) 0:03:11.879 *********** 2025-06-02 17:45:12.593405 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:12.593409 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:12.593413 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:12.593416 | orchestrator | 2025-06-02 17:45:12.593420 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 17:45:12.593428 | orchestrator | Monday 02 June 2025 17:45:11 +0000 (0:00:02.896) 0:03:14.776 *********** 2025-06-02 17:45:12.593431 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:12.593435 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:12.593439 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:12.593443 | orchestrator | 2025-06-02 17:45:12.593446 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:45:12.593450 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 17:45:12.593455 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-02 17:45:12.593460 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 17:45:12.593464 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-02 17:45:12.593467 | orchestrator | 2025-06-02 17:45:12.593471 | orchestrator | 2025-06-02 17:45:12.593475 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:45:12.593479 | orchestrator | Monday 02 June 2025 17:45:11 +0000 (0:00:00.211) 0:03:14.987 *********** 2025-06-02 17:45:12.593482 | orchestrator | =============================================================================== 2025-06-02 17:45:12.593486 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 45.46s 2025-06-02 17:45:12.593492 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.74s 2025-06-02 17:45:12.593496 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.97s 2025-06-02 17:45:12.593500 | orchestrator | 
mariadb : Check MariaDB service port liveness -------------------------- 10.98s 2025-06-02 17:45:12.593504 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.25s 2025-06-02 17:45:12.593510 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.00s 2025-06-02 17:45:12.593516 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.56s 2025-06-02 17:45:12.593522 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.25s 2025-06-02 17:45:12.593528 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.96s 2025-06-02 17:45:12.593534 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.90s 2025-06-02 17:45:12.593540 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.68s 2025-06-02 17:45:12.593546 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.66s 2025-06-02 17:45:12.593552 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.30s 2025-06-02 17:45:12.593557 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.03s 2025-06-02 17:45:12.593561 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.90s 2025-06-02 17:45:12.593565 | orchestrator | Check MariaDB service --------------------------------------------------- 2.84s 2025-06-02 17:45:12.593568 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.71s 2025-06-02 17:45:12.593572 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.69s 2025-06-02 17:45:12.593576 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.45s 2025-06-02 17:45:12.593580 | orchestrator | mariadb : 
Creating shard root mysql user -------------------------------- 2.40s 2025-06-02 17:45:15.640355 | orchestrator | 2025-06-02 17:45:15 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:15.640490 | orchestrator | 2025-06-02 17:45:15 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:15.645622 | orchestrator | 2025-06-02 17:45:15 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:15.645746 | orchestrator | 2025-06-02 17:45:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:18.703124 | orchestrator | 2025-06-02 17:45:18 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:18.703219 | orchestrator | 2025-06-02 17:45:18 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:18.704739 | orchestrator | 2025-06-02 17:45:18 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:18.704779 | orchestrator | 2025-06-02 17:45:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:21.747302 | orchestrator | 2025-06-02 17:45:21 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:21.747826 | orchestrator | 2025-06-02 17:45:21 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:21.749159 | orchestrator | 2025-06-02 17:45:21 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:21.749188 | orchestrator | 2025-06-02 17:45:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:24.814123 | orchestrator | 2025-06-02 17:45:24 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:24.816904 | orchestrator | 2025-06-02 17:45:24 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:24.821283 | orchestrator | 2025-06-02 17:45:24 | INFO  | Task 
43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:24.821360 | orchestrator | 2025-06-02 17:45:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:27.857361 | orchestrator | 2025-06-02 17:45:27 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:27.857460 | orchestrator | 2025-06-02 17:45:27 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:27.857473 | orchestrator | 2025-06-02 17:45:27 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:27.857484 | orchestrator | 2025-06-02 17:45:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:30.892767 | orchestrator | 2025-06-02 17:45:30 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:30.893099 | orchestrator | 2025-06-02 17:45:30 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:30.894109 | orchestrator | 2025-06-02 17:45:30 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:30.927502 | orchestrator | 2025-06-02 17:45:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:33.931041 | orchestrator | 2025-06-02 17:45:33 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:33.934763 | orchestrator | 2025-06-02 17:45:33 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:33.937170 | orchestrator | 2025-06-02 17:45:33 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:33.937406 | orchestrator | 2025-06-02 17:45:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:36.967610 | orchestrator | 2025-06-02 17:45:36 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:36.968518 | orchestrator | 2025-06-02 17:45:36 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state 
STARTED 2025-06-02 17:45:36.970090 | orchestrator | 2025-06-02 17:45:36 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:36.970167 | orchestrator | 2025-06-02 17:45:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:39.999231 | orchestrator | 2025-06-02 17:45:39 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:39.999936 | orchestrator | 2025-06-02 17:45:39 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:40.004368 | orchestrator | 2025-06-02 17:45:40 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:40.004436 | orchestrator | 2025-06-02 17:45:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:43.057356 | orchestrator | 2025-06-02 17:45:43 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:43.057620 | orchestrator | 2025-06-02 17:45:43 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:43.057665 | orchestrator | 2025-06-02 17:45:43 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:43.057706 | orchestrator | 2025-06-02 17:45:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:46.119582 | orchestrator | 2025-06-02 17:45:46 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:46.120942 | orchestrator | 2025-06-02 17:45:46 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:46.124063 | orchestrator | 2025-06-02 17:45:46 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:46.124100 | orchestrator | 2025-06-02 17:45:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:49.173668 | orchestrator | 2025-06-02 17:45:49 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:49.176553 | orchestrator | 
2025-06-02 17:45:49 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:49.177720 | orchestrator | 2025-06-02 17:45:49 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:49.177786 | orchestrator | 2025-06-02 17:45:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:52.230094 | orchestrator | 2025-06-02 17:45:52 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:52.232352 | orchestrator | 2025-06-02 17:45:52 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:52.235955 | orchestrator | 2025-06-02 17:45:52 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:52.236205 | orchestrator | 2025-06-02 17:45:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:55.278444 | orchestrator | 2025-06-02 17:45:55 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:55.281774 | orchestrator | 2025-06-02 17:45:55 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:55.281901 | orchestrator | 2025-06-02 17:45:55 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:55.281921 | orchestrator | 2025-06-02 17:45:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:58.337744 | orchestrator | 2025-06-02 17:45:58 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:45:58.337945 | orchestrator | 2025-06-02 17:45:58 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:45:58.337962 | orchestrator | 2025-06-02 17:45:58 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:45:58.338003 | orchestrator | 2025-06-02 17:45:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:01.382357 | orchestrator | 2025-06-02 17:46:01 | INFO  | Task 
b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:01.383687 | orchestrator | 2025-06-02 17:46:01 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:01.386290 | orchestrator | 2025-06-02 17:46:01 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:46:01.386348 | orchestrator | 2025-06-02 17:46:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:04.435380 | orchestrator | 2025-06-02 17:46:04 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:04.439024 | orchestrator | 2025-06-02 17:46:04 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:04.439096 | orchestrator | 2025-06-02 17:46:04 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:46:04.439105 | orchestrator | 2025-06-02 17:46:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:07.489135 | orchestrator | 2025-06-02 17:46:07 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:07.489512 | orchestrator | 2025-06-02 17:46:07 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:07.490611 | orchestrator | 2025-06-02 17:46:07 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state STARTED 2025-06-02 17:46:07.491115 | orchestrator | 2025-06-02 17:46:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:10.546602 | orchestrator | 2025-06-02 17:46:10 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:10.547711 | orchestrator | 2025-06-02 17:46:10 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:10.553083 | orchestrator | 2025-06-02 17:46:10.553189 | orchestrator | 2025-06-02 17:46:10.553196 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-02 17:46:10.553202 | 
orchestrator | 2025-06-02 17:46:10.553207 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-02 17:46:10.553224 | orchestrator | Monday 02 June 2025 17:44:02 +0000 (0:00:00.606) 0:00:00.606 *********** 2025-06-02 17:46:10.553262 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:46:10.553268 | orchestrator | 2025-06-02 17:46:10.553274 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-02 17:46:10.553279 | orchestrator | Monday 02 June 2025 17:44:02 +0000 (0:00:00.654) 0:00:01.261 *********** 2025-06-02 17:46:10.553283 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553289 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553294 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553299 | orchestrator | 2025-06-02 17:46:10.553303 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-02 17:46:10.553346 | orchestrator | Monday 02 June 2025 17:44:03 +0000 (0:00:00.675) 0:00:01.937 *********** 2025-06-02 17:46:10.553351 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553356 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553361 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553365 | orchestrator | 2025-06-02 17:46:10.553370 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-02 17:46:10.553375 | orchestrator | Monday 02 June 2025 17:44:03 +0000 (0:00:00.284) 0:00:02.221 *********** 2025-06-02 17:46:10.553379 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553384 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553389 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553393 | orchestrator | 2025-06-02 17:46:10.553398 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2025-06-02 17:46:10.553419 | orchestrator | Monday 02 June 2025 17:44:04 +0000 (0:00:00.788) 0:00:03.009 *********** 2025-06-02 17:46:10.553424 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553429 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553433 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553438 | orchestrator | 2025-06-02 17:46:10.553442 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-02 17:46:10.553447 | orchestrator | Monday 02 June 2025 17:44:04 +0000 (0:00:00.311) 0:00:03.321 *********** 2025-06-02 17:46:10.553451 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553456 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553460 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553465 | orchestrator | 2025-06-02 17:46:10.553594 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-02 17:46:10.553600 | orchestrator | Monday 02 June 2025 17:44:05 +0000 (0:00:00.307) 0:00:03.628 *********** 2025-06-02 17:46:10.553604 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553609 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553613 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553618 | orchestrator | 2025-06-02 17:46:10.553681 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-02 17:46:10.553686 | orchestrator | Monday 02 June 2025 17:44:05 +0000 (0:00:00.327) 0:00:03.956 *********** 2025-06-02 17:46:10.553691 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.553696 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.553700 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.553705 | orchestrator | 2025-06-02 17:46:10.553709 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-02 
17:46:10.553714 | orchestrator | Monday 02 June 2025 17:44:06 +0000 (0:00:00.512) 0:00:04.469 *********** 2025-06-02 17:46:10.553718 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553723 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553727 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553732 | orchestrator | 2025-06-02 17:46:10.553736 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-02 17:46:10.553741 | orchestrator | Monday 02 June 2025 17:44:06 +0000 (0:00:00.300) 0:00:04.770 *********** 2025-06-02 17:46:10.553745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:46:10.553750 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:46:10.553755 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:46:10.553759 | orchestrator | 2025-06-02 17:46:10.553764 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 17:46:10.553768 | orchestrator | Monday 02 June 2025 17:44:07 +0000 (0:00:00.701) 0:00:05.472 *********** 2025-06-02 17:46:10.553773 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.553777 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.553782 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.553786 | orchestrator | 2025-06-02 17:46:10.553791 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 17:46:10.553933 | orchestrator | Monday 02 June 2025 17:44:07 +0000 (0:00:00.462) 0:00:05.934 *********** 2025-06-02 17:46:10.553941 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:46:10.553946 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:46:10.553950 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:46:10.553955 | orchestrator | 2025-06-02 17:46:10.553959 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 17:46:10.553964 | orchestrator | Monday 02 June 2025 17:44:09 +0000 (0:00:02.152) 0:00:08.087 *********** 2025-06-02 17:46:10.553976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 17:46:10.553988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 17:46:10.553993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 17:46:10.553997 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554002 | orchestrator | 2025-06-02 17:46:10.554038 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 17:46:10.554065 | orchestrator | Monday 02 June 2025 17:44:10 +0000 (0:00:00.407) 0:00:08.494 *********** 2025-06-02 17:46:10.554073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.554081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.554086 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.554090 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554095 | orchestrator | 2025-06-02 
17:46:10.554100 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 17:46:10.554104 | orchestrator | Monday 02 June 2025 17:44:10 +0000 (0:00:00.784) 0:00:09.279 *********** 2025-06-02 17:46:10.554110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.554118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.554123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.554127 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554132 | orchestrator | 2025-06-02 17:46:10.554137 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-02 17:46:10.554141 | orchestrator | Monday 02 June 2025 17:44:11 +0000 (0:00:00.152) 0:00:09.432 *********** 2025-06-02 17:46:10.554147 | orchestrator | 
ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c0b1b9c73486', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 17:44:08.269976', 'end': '2025-06-02 17:44:08.315566', 'delta': '0:00:00.045590', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c0b1b9c73486'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-02 17:46:10.554186 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd2f68413b6c8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 17:44:09.025510', 'end': '2025-06-02 17:44:09.069917', 'delta': '0:00:00.044407', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d2f68413b6c8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-02 17:46:10.554209 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd97888002843', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 17:44:09.569404', 'end': '2025-06-02 17:44:09.620194', 'delta': '0:00:00.050790', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d97888002843'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-02 17:46:10.554215 | orchestrator | 2025-06-02 17:46:10.554220 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-02 17:46:10.554224 | orchestrator | Monday 02 June 2025 17:44:11 +0000 (0:00:00.385) 0:00:09.817 *********** 2025-06-02 17:46:10.554229 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.554234 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.554238 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.554243 | orchestrator | 2025-06-02 17:46:10.554247 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 17:46:10.554252 | orchestrator | Monday 02 June 2025 17:44:11 +0000 (0:00:00.450) 0:00:10.268 *********** 2025-06-02 17:46:10.554256 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-02 17:46:10.554261 | orchestrator | 2025-06-02 17:46:10.554266 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 17:46:10.554270 | orchestrator | Monday 02 June 2025 17:44:13 +0000 (0:00:01.840) 0:00:12.109 *********** 2025-06-02 17:46:10.554275 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554279 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554284 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554289 | orchestrator | 2025-06-02 17:46:10.554293 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 17:46:10.554298 | orchestrator | Monday 02 June 2025 17:44:14 +0000 (0:00:00.300) 0:00:12.409 *********** 2025-06-02 
17:46:10.554302 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554307 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554311 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554316 | orchestrator | 2025-06-02 17:46:10.554320 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 17:46:10.554325 | orchestrator | Monday 02 June 2025 17:44:14 +0000 (0:00:00.419) 0:00:12.829 *********** 2025-06-02 17:46:10.554329 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554334 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554338 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554362 | orchestrator | 2025-06-02 17:46:10.554367 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 17:46:10.554372 | orchestrator | Monday 02 June 2025 17:44:14 +0000 (0:00:00.501) 0:00:13.330 *********** 2025-06-02 17:46:10.554376 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.554381 | orchestrator | 2025-06-02 17:46:10.554385 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 17:46:10.554394 | orchestrator | Monday 02 June 2025 17:44:15 +0000 (0:00:00.146) 0:00:13.477 *********** 2025-06-02 17:46:10.554398 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554403 | orchestrator | 2025-06-02 17:46:10.554407 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 17:46:10.554412 | orchestrator | Monday 02 June 2025 17:44:15 +0000 (0:00:00.230) 0:00:13.708 *********** 2025-06-02 17:46:10.554417 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554421 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554426 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554430 | orchestrator | 2025-06-02 17:46:10.554435 | orchestrator | 
TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 17:46:10.554440 | orchestrator | Monday 02 June 2025 17:44:15 +0000 (0:00:00.280) 0:00:13.988 *********** 2025-06-02 17:46:10.554444 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554449 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554453 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554458 | orchestrator | 2025-06-02 17:46:10.554462 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-02 17:46:10.554467 | orchestrator | Monday 02 June 2025 17:44:15 +0000 (0:00:00.305) 0:00:14.294 *********** 2025-06-02 17:46:10.554471 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554476 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554480 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554485 | orchestrator | 2025-06-02 17:46:10.554489 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-02 17:46:10.554494 | orchestrator | Monday 02 June 2025 17:44:16 +0000 (0:00:00.524) 0:00:14.818 *********** 2025-06-02 17:46:10.554498 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554503 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554507 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554512 | orchestrator | 2025-06-02 17:46:10.554516 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-02 17:46:10.554521 | orchestrator | Monday 02 June 2025 17:44:16 +0000 (0:00:00.324) 0:00:15.142 *********** 2025-06-02 17:46:10.554526 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554530 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554534 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554539 | orchestrator | 2025-06-02 17:46:10.554544 | orchestrator | 
TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 17:46:10.554548 | orchestrator | Monday 02 June 2025 17:44:17 +0000 (0:00:00.355) 0:00:15.498 *********** 2025-06-02 17:46:10.554553 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554557 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554563 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554568 | orchestrator | 2025-06-02 17:46:10.554574 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 17:46:10.554592 | orchestrator | Monday 02 June 2025 17:44:17 +0000 (0:00:00.295) 0:00:15.793 *********** 2025-06-02 17:46:10.554597 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554603 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554612 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.554617 | orchestrator | 2025-06-02 17:46:10.554623 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 17:46:10.554628 | orchestrator | Monday 02 June 2025 17:44:17 +0000 (0:00:00.513) 0:00:16.307 *********** 2025-06-02 17:46:10.554634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704', 'dm-uuid-LVM-KMmsn0EVITsGj9TWOXyYzPFcl9Vg8RYvuZnGX1fEon7QrG8BXfWLQNyn31cle28T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9', 'dm-uuid-LVM-CESb8QC4Tp8nXi0PF2s5S4xvHCsfRXnP3wjEcSkbBJWdn2phWRkcvR7USA0zDhtB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26', 'dm-uuid-LVM-1VZYIg7KCwGMXSKssoRinN9zS5U8TxXk9Uvj5DuJRlLOZWdlspbHlbvb9xrYZJt2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KeHdNZ-tekv-q3Jm-pKmi-C8MP-DuHa-KUx04F', 'scsi-0QEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1', 'scsi-SQEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9', 'dm-uuid-LVM-Yn18L1MERL5p93hCY1551alTwNNRtouMaJhiE4ZDnlFkO3T4lsYdSaRGsHed8tf2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FSVcen-xfak-l0K6-V65O-0nOf-M99l-6K8YWo', 'scsi-0QEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361', 'scsi-SQEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767', 'scsi-SQEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-02 17:46:10.554854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554886 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.554898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16', 
'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PWQM6B-jy51-yHR4-Xcur-JWGt-c4rk-j5fZG9', 'scsi-0QEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33', 'scsi-SQEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cYdt4D-B5Wq-Mjwb-9Ydz-e3BM-44vE-VXd1px', 'scsi-0QEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38', 'scsi-SQEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148', 'scsi-SQEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.554928 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.554933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c', 'dm-uuid-LVM-b2DUe6pPjWw4q9EUJVUjIvE3Me0qGzC9JNcGY7fMyv8yeJzKZcRP1q95YMxjL7oH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b', 'dm-uuid-LVM-oXW0HnudB9NGFV2CziApkCUlse954NVKg0dAUucQMXIjGY5IE8PcBcxv61Xaa3tO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554984 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.554989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:46:10.555000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.555010 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gjnCCA-r0Z1-l49w-WvEU-R0jc-GTtC-9JSoTT', 'scsi-0QEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f', 'scsi-SQEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.555015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vYaIGf-yEgl-Ymyy-5uFH-5UfI-zYmZ-prR9B8', 'scsi-0QEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb', 'scsi-SQEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.555020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a', 'scsi-SQEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.555031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:46:10.555039 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555043 | orchestrator | 2025-06-02 17:46:10.555048 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 17:46:10.555053 | orchestrator | Monday 02 June 2025 17:44:18 +0000 (0:00:00.603) 0:00:16.911 *********** 2025-06-02 17:46:10.555058 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704', 'dm-uuid-LVM-KMmsn0EVITsGj9TWOXyYzPFcl9Vg8RYvuZnGX1fEon7QrG8BXfWLQNyn31cle28T'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555063 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9', 'dm-uuid-LVM-CESb8QC4Tp8nXi0PF2s5S4xvHCsfRXnP3wjEcSkbBJWdn2phWRkcvR7USA0zDhtB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555068 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26', 'dm-uuid-LVM-1VZYIg7KCwGMXSKssoRinN9zS5U8TxXk9Uvj5DuJRlLOZWdlspbHlbvb9xrYZJt2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc2aa846-a38d-43a1-9fee-c1088582d602-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 17:46:10.555138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9', 'dm-uuid-LVM-Yn18L1MERL5p93hCY1551alTwNNRtouMaJhiE4ZDnlFkO3T4lsYdSaRGsHed8tf2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704-osd--block--94958c5d--ab49--5ebf--a5cb--ef67fe0a9704'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KeHdNZ-tekv-q3Jm-pKmi-C8MP-DuHa-KUx04F', 'scsi-0QEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1', 'scsi-SQEMU_QEMU_HARDDISK_f15aa92f-a864-46a7-a446-d151182076d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555154 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--42dde184--17ae--50b7--8921--f17969f5efd9-osd--block--42dde184--17ae--50b7--8921--f17969f5efd9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FSVcen-xfak-l0K6-V65O-0nOf-M99l-6K8YWo', 'scsi-0QEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361', 'scsi-SQEMU_QEMU_HARDDISK_abb01d95-8fd4-488e-8b6c-7cb2a7271361'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555170 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767', 'scsi-SQEMU_QEMU_HARDDISK_5d913f80-ed99-4f7f-af77-a272e71d6767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555202 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555207 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555212 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555221 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555226 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555243 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ce206cb-87d4-44fa-8b19-ffddc5f2b300-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 17:46:10.555248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de836c00--0412--5e15--aa8a--abef9bebfb26-osd--block--de836c00--0412--5e15--aa8a--abef9bebfb26'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PWQM6B-jy51-yHR4-Xcur-JWGt-c4rk-j5fZG9', 'scsi-0QEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33', 'scsi-SQEMU_QEMU_HARDDISK_37a5ef51-3790-4474-9294-da6668d88e33'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555254 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9-osd--block--c404b240--9cf0--5c0e--97ba--c570a8ba4cd9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cYdt4D-B5Wq-Mjwb-9Ydz-e3BM-44vE-VXd1px', 'scsi-0QEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38', 'scsi-SQEMU_QEMU_HARDDISK_8b34934e-11eb-4c36-8207-511a42fe0f38'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c', 'dm-uuid-LVM-b2DUe6pPjWw4q9EUJVUjIvE3Me0qGzC9JNcGY7fMyv8yeJzKZcRP1q95YMxjL7oH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555274 | orchestrator | 2025-06-02 17:46:10 | INFO  | Task 43fe686c-359f-4025-9d70-d392ea31c5c1 is in state SUCCESS 2025-06-02 17:46:10.555283 | orchestrator | 2025-06-02 17:46:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:10.555301 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148', 'scsi-SQEMU_QEMU_HARDDISK_d22e3547-dc50-4b67-b48e-5886da7d5148'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555309 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b', 'dm-uuid-LVM-oXW0HnudB9NGFV2CziApkCUlse954NVKg0dAUucQMXIjGY5IE8PcBcxv61Xaa3tO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555324 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555336 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555351 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16', 'scsi-SQEMU_QEMU_HARDDISK_33e5408c-eac4-45cf-8284-ea43471071f8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 17:46:10.555427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--33d58ee2--4c10--58b1--ba9c--becc4d68c01c-osd--block--33d58ee2--4c10--58b1--ba9c--becc4d68c01c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gjnCCA-r0Z1-l49w-WvEU-R0jc-GTtC-9JSoTT', 'scsi-0QEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f', 'scsi-SQEMU_QEMU_HARDDISK_cc6b7f8a-a299-449d-8912-3815da19ff1f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555439 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b-osd--block--a4a4ffc0--4b1a--5123--a777--2de0f9f46a6b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vYaIGf-yEgl-Ymyy-5uFH-5UfI-zYmZ-prR9B8', 'scsi-0QEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb', 'scsi-SQEMU_QEMU_HARDDISK_fb369b5e-a271-4fa4-9f85-1311171daecb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555447 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a', 'scsi-SQEMU_QEMU_HARDDISK_6f5db02e-386c-41b9-ae07-b7cce6e0964a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555462 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:46:10.555467 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555472 | orchestrator | 2025-06-02 17:46:10.555476 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-02 17:46:10.555481 | orchestrator | Monday 02 June 2025 17:44:19 +0000 (0:00:00.661) 0:00:17.572 *********** 2025-06-02 17:46:10.555486 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.555491 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.555495 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.555500 | orchestrator | 2025-06-02 17:46:10.555505 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-02 17:46:10.555509 | orchestrator | Monday 02 June 2025 17:44:19 +0000 (0:00:00.717) 0:00:18.289 *********** 2025-06-02 17:46:10.555514 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.555518 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.555523 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.555527 | orchestrator | 2025-06-02 17:46:10.555532 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 17:46:10.555536 | orchestrator | Monday 02 June 2025 17:44:20 +0000 (0:00:00.527) 0:00:18.817 *********** 2025-06-02 17:46:10.555541 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.555545 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.555550 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.555554 | orchestrator | 2025-06-02 17:46:10.555559 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 17:46:10.555567 | orchestrator | Monday 02 June 2025 17:44:21 +0000 (0:00:00.643) 0:00:19.460 
*********** 2025-06-02 17:46:10.555572 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555577 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555581 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555586 | orchestrator | 2025-06-02 17:46:10.555590 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 17:46:10.555595 | orchestrator | Monday 02 June 2025 17:44:21 +0000 (0:00:00.315) 0:00:19.776 *********** 2025-06-02 17:46:10.555599 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555604 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555609 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555614 | orchestrator | 2025-06-02 17:46:10.555618 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 17:46:10.555623 | orchestrator | Monday 02 June 2025 17:44:21 +0000 (0:00:00.468) 0:00:20.244 *********** 2025-06-02 17:46:10.555627 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555632 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555636 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555641 | orchestrator | 2025-06-02 17:46:10.555645 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-02 17:46:10.555652 | orchestrator | Monday 02 June 2025 17:44:22 +0000 (0:00:00.570) 0:00:20.815 *********** 2025-06-02 17:46:10.555660 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-02 17:46:10.555670 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-02 17:46:10.555680 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-02 17:46:10.555687 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-02 17:46:10.555694 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-02 17:46:10.555701 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-02 17:46:10.555708 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-02 17:46:10.555714 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-02 17:46:10.555721 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-02 17:46:10.555729 | orchestrator | 2025-06-02 17:46:10.555737 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-02 17:46:10.555744 | orchestrator | Monday 02 June 2025 17:44:23 +0000 (0:00:00.873) 0:00:21.688 *********** 2025-06-02 17:46:10.555751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 17:46:10.555760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 17:46:10.555766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 17:46:10.555770 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555775 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-02 17:46:10.555779 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-02 17:46:10.555784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-02 17:46:10.555788 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-02 17:46:10.555797 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-02 17:46:10.555826 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-02 17:46:10.555832 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555836 | orchestrator | 2025-06-02 17:46:10.555841 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-02 17:46:10.555845 | orchestrator | Monday 02 June 2025 17:44:23 +0000 (0:00:00.364) 0:00:22.052 *********** 2025-06-02 
17:46:10.555850 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:46:10.555855 | orchestrator | 2025-06-02 17:46:10.555864 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-02 17:46:10.555887 | orchestrator | Monday 02 June 2025 17:44:24 +0000 (0:00:00.715) 0:00:22.768 *********** 2025-06-02 17:46:10.555892 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555900 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555905 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555909 | orchestrator | 2025-06-02 17:46:10.555913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-02 17:46:10.555918 | orchestrator | Monday 02 June 2025 17:44:24 +0000 (0:00:00.322) 0:00:23.090 *********** 2025-06-02 17:46:10.555923 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555927 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555931 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555936 | orchestrator | 2025-06-02 17:46:10.555941 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-02 17:46:10.555945 | orchestrator | Monday 02 June 2025 17:44:25 +0000 (0:00:00.302) 0:00:23.393 *********** 2025-06-02 17:46:10.555949 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.555954 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.555958 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:46:10.555963 | orchestrator | 2025-06-02 17:46:10.555967 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-02 17:46:10.555972 | orchestrator | Monday 02 June 2025 17:44:25 +0000 (0:00:00.327) 0:00:23.721 *********** 2025-06-02 
17:46:10.555976 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.555981 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.555985 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.555990 | orchestrator | 2025-06-02 17:46:10.555995 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-02 17:46:10.555999 | orchestrator | Monday 02 June 2025 17:44:26 +0000 (0:00:00.634) 0:00:24.355 *********** 2025-06-02 17:46:10.556003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:46:10.556008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:46:10.556012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:46:10.556017 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.556021 | orchestrator | 2025-06-02 17:46:10.556026 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 17:46:10.556030 | orchestrator | Monday 02 June 2025 17:44:26 +0000 (0:00:00.444) 0:00:24.800 *********** 2025-06-02 17:46:10.556035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:46:10.556039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:46:10.556044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:46:10.556049 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.556053 | orchestrator | 2025-06-02 17:46:10.556057 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 17:46:10.556062 | orchestrator | Monday 02 June 2025 17:44:26 +0000 (0:00:00.362) 0:00:25.162 *********** 2025-06-02 17:46:10.556066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:46:10.556071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:46:10.556075 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:46:10.556080 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.556084 | orchestrator | 2025-06-02 17:46:10.556089 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 17:46:10.556093 | orchestrator | Monday 02 June 2025 17:44:27 +0000 (0:00:00.358) 0:00:25.521 *********** 2025-06-02 17:46:10.556098 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:46:10.556102 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:46:10.556107 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:46:10.556111 | orchestrator | 2025-06-02 17:46:10.556116 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-02 17:46:10.556120 | orchestrator | Monday 02 June 2025 17:44:27 +0000 (0:00:00.339) 0:00:25.861 *********** 2025-06-02 17:46:10.556131 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 17:46:10.556135 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-02 17:46:10.556140 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-02 17:46:10.556144 | orchestrator | 2025-06-02 17:46:10.556149 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-02 17:46:10.556153 | orchestrator | Monday 02 June 2025 17:44:28 +0000 (0:00:00.517) 0:00:26.378 *********** 2025-06-02 17:46:10.556158 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:46:10.556162 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:46:10.556167 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:46:10.556171 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-02 17:46:10.556176 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-06-02 17:46:10.556180 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 17:46:10.556185 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 17:46:10.556189 | orchestrator | 2025-06-02 17:46:10.556194 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-02 17:46:10.556199 | orchestrator | Monday 02 June 2025 17:44:29 +0000 (0:00:01.016) 0:00:27.395 *********** 2025-06-02 17:46:10.556203 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:46:10.556208 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:46:10.556212 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:46:10.556216 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-02 17:46:10.556221 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 17:46:10.556228 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 17:46:10.556236 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 17:46:10.556241 | orchestrator | 2025-06-02 17:46:10.556245 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-02 17:46:10.556250 | orchestrator | Monday 02 June 2025 17:44:31 +0000 (0:00:02.012) 0:00:29.407 *********** 2025-06-02 17:46:10.556254 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:46:10.556259 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:46:10.556264 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-02 17:46:10.556268 | orchestrator | 2025-06-02 17:46:10.556273 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-02 17:46:10.556278 | orchestrator | Monday 02 June 2025 17:44:31 +0000 (0:00:00.389) 0:00:29.797 *********** 2025-06-02 17:46:10.556284 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:46:10.556290 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:46:10.556295 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:46:10.556304 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:46:10.556309 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:46:10.556313 | orchestrator | 2025-06-02 17:46:10.556318 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-06-02 17:46:10.556322 | orchestrator | Monday 02 June 2025 17:45:15 +0000 (0:00:43.823) 0:01:13.621 *********** 2025-06-02 17:46:10.556327 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556331 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556336 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556340 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556345 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556349 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556354 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-02 17:46:10.556358 | orchestrator | 2025-06-02 17:46:10.556363 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-02 17:46:10.556367 | orchestrator | Monday 02 June 2025 17:45:39 +0000 (0:00:23.711) 0:01:37.332 *********** 2025-06-02 17:46:10.556372 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556376 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556381 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556385 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556390 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556394 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556399 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:46:10.556403 | orchestrator | 2025-06-02 17:46:10.556408 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-02 17:46:10.556412 | orchestrator | Monday 02 June 2025 17:45:51 +0000 (0:00:12.380) 0:01:49.713 *********** 2025-06-02 17:46:10.556417 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556424 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:46:10.556435 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:46:10.556449 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556456 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:46:10.556469 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:46:10.556476 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556482 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:46:10.556489 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:46:10.556495 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556508 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:46:10.556514 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:46:10.556522 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556529 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-06-02 17:46:10.556536 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:46:10.556544 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:46:10.556552 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:46:10.556559 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:46:10.556567 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-02 17:46:10.556576 | orchestrator | 2025-06-02 17:46:10.556581 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:46:10.556585 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-02 17:46:10.556592 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-02 17:46:10.556597 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 17:46:10.556601 | orchestrator | 2025-06-02 17:46:10.556606 | orchestrator | 2025-06-02 17:46:10.556610 | orchestrator | 2025-06-02 17:46:10.556615 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:46:10.556620 | orchestrator | Monday 02 June 2025 17:46:09 +0000 (0:00:17.874) 0:02:07.587 *********** 2025-06-02 17:46:10.556625 | orchestrator | =============================================================================== 2025-06-02 17:46:10.556633 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.82s 2025-06-02 17:46:10.556640 | orchestrator | generate keys ---------------------------------------------------------- 23.71s 2025-06-02 17:46:10.556648 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.87s 
2025-06-02 17:46:10.556655 | orchestrator | get keys from monitors ------------------------------------------------- 12.38s 2025-06-02 17:46:10.556662 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.15s 2025-06-02 17:46:10.556669 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.01s 2025-06-02 17:46:10.556676 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.84s 2025-06-02 17:46:10.556684 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2025-06-02 17:46:10.556691 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2025-06-02 17:46:10.556699 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.79s 2025-06-02 17:46:10.556706 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.79s 2025-06-02 17:46:10.556714 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2025-06-02 17:46:10.556721 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s 2025-06-02 17:46:10.556728 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s 2025-06-02 17:46:10.556736 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s 2025-06-02 17:46:10.556744 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.66s 2025-06-02 17:46:10.556748 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2025-06-02 17:46:10.556753 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2025-06-02 17:46:10.556757 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s 2025-06-02 
17:46:10.556767 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.60s 2025-06-02 17:46:13.603907 | orchestrator | 2025-06-02 17:46:13 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:13.605121 | orchestrator | 2025-06-02 17:46:13 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:13.606482 | orchestrator | 2025-06-02 17:46:13 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:13.606533 | orchestrator | 2025-06-02 17:46:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:16.650869 | orchestrator | 2025-06-02 17:46:16 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:16.652215 | orchestrator | 2025-06-02 17:46:16 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:16.654412 | orchestrator | 2025-06-02 17:46:16 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:16.654497 | orchestrator | 2025-06-02 17:46:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:19.702621 | orchestrator | 2025-06-02 17:46:19 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:19.707661 | orchestrator | 2025-06-02 17:46:19 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:19.710980 | orchestrator | 2025-06-02 17:46:19 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:19.711071 | orchestrator | 2025-06-02 17:46:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:22.762398 | orchestrator | 2025-06-02 17:46:22 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:22.763893 | orchestrator | 2025-06-02 17:46:22 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:22.768129 | orchestrator | 2025-06-02 
17:46:22 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:22.768272 | orchestrator | 2025-06-02 17:46:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:25.827284 | orchestrator | 2025-06-02 17:46:25 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:25.830555 | orchestrator | 2025-06-02 17:46:25 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:25.840721 | orchestrator | 2025-06-02 17:46:25 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:25.841170 | orchestrator | 2025-06-02 17:46:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:28.894454 | orchestrator | 2025-06-02 17:46:28 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:28.894566 | orchestrator | 2025-06-02 17:46:28 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:28.895536 | orchestrator | 2025-06-02 17:46:28 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:28.895578 | orchestrator | 2025-06-02 17:46:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:31.946110 | orchestrator | 2025-06-02 17:46:31 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:31.946227 | orchestrator | 2025-06-02 17:46:31 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:31.946235 | orchestrator | 2025-06-02 17:46:31 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:31.946242 | orchestrator | 2025-06-02 17:46:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:34.980589 | orchestrator | 2025-06-02 17:46:34 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:34.983415 | orchestrator | 2025-06-02 17:46:34 | INFO  | Task 
b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:34.986061 | orchestrator | 2025-06-02 17:46:34 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:34.986107 | orchestrator | 2025-06-02 17:46:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:38.036235 | orchestrator | 2025-06-02 17:46:38 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state STARTED 2025-06-02 17:46:38.039084 | orchestrator | 2025-06-02 17:46:38 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:38.039967 | orchestrator | 2025-06-02 17:46:38 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:38.040005 | orchestrator | 2025-06-02 17:46:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:41.092411 | orchestrator | 2025-06-02 17:46:41 | INFO  | Task f674144c-c4da-4f92-ae4c-f8cebf978d6c is in state SUCCESS 2025-06-02 17:46:41.094166 | orchestrator | 2025-06-02 17:46:41 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:41.096137 | orchestrator | 2025-06-02 17:46:41 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:41.098182 | orchestrator | 2025-06-02 17:46:41 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:46:41.098653 | orchestrator | 2025-06-02 17:46:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:44.160660 | orchestrator | 2025-06-02 17:46:44 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:44.163830 | orchestrator | 2025-06-02 17:46:44 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:44.165578 | orchestrator | 2025-06-02 17:46:44 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:46:44.165641 | orchestrator | 2025-06-02 17:46:44 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 17:46:47.209404 | orchestrator | 2025-06-02 17:46:47 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:47.211387 | orchestrator | 2025-06-02 17:46:47 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:47.213610 | orchestrator | 2025-06-02 17:46:47 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:46:47.213640 | orchestrator | 2025-06-02 17:46:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:50.257239 | orchestrator | 2025-06-02 17:46:50 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:50.259245 | orchestrator | 2025-06-02 17:46:50 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:50.262465 | orchestrator | 2025-06-02 17:46:50 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:46:50.262545 | orchestrator | 2025-06-02 17:46:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:53.303219 | orchestrator | 2025-06-02 17:46:53 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:53.306374 | orchestrator | 2025-06-02 17:46:53 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:53.308493 | orchestrator | 2025-06-02 17:46:53 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:46:53.308570 | orchestrator | 2025-06-02 17:46:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:56.365292 | orchestrator | 2025-06-02 17:46:56 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:56.366537 | orchestrator | 2025-06-02 17:46:56 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:56.369054 | orchestrator | 2025-06-02 17:46:56 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 
17:46:56.369140 | orchestrator | 2025-06-02 17:46:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:59.415134 | orchestrator | 2025-06-02 17:46:59 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:46:59.415379 | orchestrator | 2025-06-02 17:46:59 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:46:59.416154 | orchestrator | 2025-06-02 17:46:59 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:46:59.416221 | orchestrator | 2025-06-02 17:46:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:02.459817 | orchestrator | 2025-06-02 17:47:02 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:02.462422 | orchestrator | 2025-06-02 17:47:02 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:47:02.465517 | orchestrator | 2025-06-02 17:47:02 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:02.465609 | orchestrator | 2025-06-02 17:47:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:05.500183 | orchestrator | 2025-06-02 17:47:05 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:05.502674 | orchestrator | 2025-06-02 17:47:05 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:47:05.506344 | orchestrator | 2025-06-02 17:47:05 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:05.506671 | orchestrator | 2025-06-02 17:47:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:08.550174 | orchestrator | 2025-06-02 17:47:08 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:08.552180 | orchestrator | 2025-06-02 17:47:08 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state STARTED 2025-06-02 17:47:08.553576 | orchestrator | 2025-06-02 17:47:08 | 
INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:08.553784 | orchestrator | 2025-06-02 17:47:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:11.616978 | orchestrator | 2025-06-02 17:47:11 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:11.618907 | orchestrator | 2025-06-02 17:47:11 | INFO  | Task 7f89fefa-9baf-41b0-a33a-aba43d10a4f6 is in state SUCCESS 2025-06-02 17:47:11.620586 | orchestrator | 2025-06-02 17:47:11.620630 | orchestrator | 2025-06-02 17:47:11.620641 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-02 17:47:11.620655 | orchestrator | 2025-06-02 17:47:11.620671 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-02 17:47:11.620682 | orchestrator | Monday 02 June 2025 17:46:14 +0000 (0:00:00.163) 0:00:00.163 *********** 2025-06-02 17:47:11.620691 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-02 17:47:11.620703 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.620741 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.620777 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 17:47:11.620788 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.620797 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-02 17:47:11.620807 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-02 17:47:11.620817 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-02 17:47:11.620826 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-02 17:47:11.620836 | orchestrator | 2025-06-02 17:47:11.620845 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-02 17:47:11.620855 | orchestrator | Monday 02 June 2025 17:46:18 +0000 (0:00:04.044) 0:00:04.208 *********** 2025-06-02 17:47:11.620865 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 17:47:11.620875 | orchestrator | 2025-06-02 17:47:11.620885 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-02 17:47:11.620895 | orchestrator | Monday 02 June 2025 17:46:19 +0000 (0:00:00.989) 0:00:05.198 *********** 2025-06-02 17:47:11.620904 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-02 17:47:11.620914 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.620924 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.620933 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 17:47:11.621003 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.621014 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-02 17:47:11.621024 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-02 17:47:11.621034 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-02 17:47:11.621043 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-02 17:47:11.621053 | orchestrator | 2025-06-02 
17:47:11.621063 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-02 17:47:11.621073 | orchestrator | Monday 02 June 2025 17:46:32 +0000 (0:00:13.437) 0:00:18.636 *********** 2025-06-02 17:47:11.621083 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-02 17:47:11.621093 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.621102 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.621116 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 17:47:11.621475 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 17:47:11.621486 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-02 17:47:11.621496 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-02 17:47:11.621506 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-02 17:47:11.621520 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-02 17:47:11.621537 | orchestrator | 2025-06-02 17:47:11.621552 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:47:11.621567 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:47:11.621584 | orchestrator | 2025-06-02 17:47:11.621600 | orchestrator | 2025-06-02 17:47:11.621618 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:47:11.621650 | orchestrator | Monday 02 June 2025 17:46:39 +0000 (0:00:06.981) 0:00:25.617 *********** 2025-06-02 17:47:11.621660 | orchestrator | =============================================================================== 2025-06-02 17:47:11.621670 | 
orchestrator | Write ceph keys to the share directory --------------------------------- 13.44s 2025-06-02 17:47:11.621680 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.98s 2025-06-02 17:47:11.621701 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.04s 2025-06-02 17:47:11.621749 | orchestrator | Create share directory -------------------------------------------------- 0.99s 2025-06-02 17:47:11.621759 | orchestrator | 2025-06-02 17:47:11.621770 | orchestrator | 2025-06-02 17:47:11.621779 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:47:11.621789 | orchestrator | 2025-06-02 17:47:11.621813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:47:11.621823 | orchestrator | Monday 02 June 2025 17:45:16 +0000 (0:00:00.279) 0:00:00.279 *********** 2025-06-02 17:47:11.621833 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.621842 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.621852 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.621861 | orchestrator | 2025-06-02 17:47:11.621871 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:47:11.621881 | orchestrator | Monday 02 June 2025 17:45:16 +0000 (0:00:00.321) 0:00:00.600 *********** 2025-06-02 17:47:11.621890 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-02 17:47:11.621900 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-02 17:47:11.621910 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-02 17:47:11.621919 | orchestrator | 2025-06-02 17:47:11.621929 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-02 17:47:11.621938 | orchestrator | 2025-06-02 17:47:11.621948 | orchestrator | TASK 
[horizon : include_tasks] ************************************************* 2025-06-02 17:47:11.621958 | orchestrator | Monday 02 June 2025 17:45:17 +0000 (0:00:00.437) 0:00:01.038 *********** 2025-06-02 17:47:11.621968 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:47:11.621978 | orchestrator | 2025-06-02 17:47:11.621987 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-02 17:47:11.621997 | orchestrator | Monday 02 June 2025 17:45:17 +0000 (0:00:00.544) 0:00:01.583 *********** 2025-06-02 17:47:11.622012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.622104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.622120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.622138 | orchestrator | 2025-06-02 17:47:11.622149 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-02 17:47:11.622160 | orchestrator | Monday 02 June 2025 17:45:18 +0000 (0:00:01.216) 0:00:02.799 *********** 2025-06-02 17:47:11.622171 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.622182 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.622194 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.622205 | orchestrator | 2025-06-02 17:47:11.622217 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 17:47:11.622233 | orchestrator | Monday 02 June 2025 17:45:19 +0000 (0:00:00.493) 
0:00:03.293 *********** 2025-06-02 17:47:11.622244 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 17:47:11.622261 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 17:47:11.622273 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 17:47:11.622284 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 17:47:11.622295 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 17:47:11.622306 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 17:47:11.622318 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-02 17:47:11.622329 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 17:47:11.622340 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 17:47:11.622351 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 17:47:11.622363 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 17:47:11.622374 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 17:47:11.622385 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 17:47:11.622396 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 17:47:11.622408 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-02 17:47:11.622419 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 17:47:11.622430 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 17:47:11.622440 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 17:47:11.622449 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 17:47:11.622459 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 17:47:11.622469 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 17:47:11.622484 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 17:47:11.622497 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-02 17:47:11.622517 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 17:47:11.622543 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-02 17:47:11.622561 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-02 17:47:11.622578 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-02 17:47:11.622594 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-02 17:47:11.622610 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-02 17:47:11.622627 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-02 17:47:11.622645 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-02 17:47:11.622663 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-02 17:47:11.622680 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-02 17:47:11.622695 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-02 17:47:11.622745 | orchestrator | 2025-06-02 17:47:11.622757 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.622767 | orchestrator | Monday 02 June 2025 17:45:20 +0000 (0:00:00.885) 0:00:04.178 *********** 2025-06-02 17:47:11.622777 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.622786 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.622796 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.622805 | orchestrator | 2025-06-02 17:47:11.622822 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.622832 | orchestrator | Monday 02 June 2025 17:45:20 +0000 (0:00:00.323) 0:00:04.502 *********** 2025-06-02 17:47:11.622841 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.622851 | orchestrator | 2025-06-02 17:47:11.622869 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.622879 | orchestrator | Monday 02 June 2025 17:45:20 +0000 (0:00:00.135) 0:00:04.637 
*********** 2025-06-02 17:47:11.622888 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.622898 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.622907 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.622917 | orchestrator | 2025-06-02 17:47:11.622927 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.622936 | orchestrator | Monday 02 June 2025 17:45:21 +0000 (0:00:00.486) 0:00:05.124 *********** 2025-06-02 17:47:11.622946 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.622955 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.622965 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.622974 | orchestrator | 2025-06-02 17:47:11.622984 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.623002 | orchestrator | Monday 02 June 2025 17:45:21 +0000 (0:00:00.310) 0:00:05.434 *********** 2025-06-02 17:47:11.623012 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623022 | orchestrator | 2025-06-02 17:47:11.623037 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.623054 | orchestrator | Monday 02 June 2025 17:45:21 +0000 (0:00:00.131) 0:00:05.565 *********** 2025-06-02 17:47:11.623070 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623086 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.623102 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.623120 | orchestrator | 2025-06-02 17:47:11.623138 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.623155 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.274) 0:00:05.839 *********** 2025-06-02 17:47:11.623170 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.623180 | orchestrator | ok: [testbed-node-1] 2025-06-02 
17:47:11.623191 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.623207 | orchestrator | 2025-06-02 17:47:11.623222 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.623237 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.297) 0:00:06.137 *********** 2025-06-02 17:47:11.623252 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623267 | orchestrator | 2025-06-02 17:47:11.623283 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.623300 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.335) 0:00:06.473 *********** 2025-06-02 17:47:11.623316 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623333 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.623349 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.623365 | orchestrator | 2025-06-02 17:47:11.623382 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.623398 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.328) 0:00:06.801 *********** 2025-06-02 17:47:11.623409 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.623418 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.623428 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.623438 | orchestrator | 2025-06-02 17:47:11.623447 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.623457 | orchestrator | Monday 02 June 2025 17:45:23 +0000 (0:00:00.291) 0:00:07.093 *********** 2025-06-02 17:47:11.623466 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623475 | orchestrator | 2025-06-02 17:47:11.623485 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.623494 | orchestrator | Monday 02 June 2025 17:45:23 +0000 
(0:00:00.141) 0:00:07.234 *********** 2025-06-02 17:47:11.623504 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623513 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.623523 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.623532 | orchestrator | 2025-06-02 17:47:11.623542 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.623551 | orchestrator | Monday 02 June 2025 17:45:23 +0000 (0:00:00.294) 0:00:07.529 *********** 2025-06-02 17:47:11.623561 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.623570 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.623582 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.623598 | orchestrator | 2025-06-02 17:47:11.623615 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.623631 | orchestrator | Monday 02 June 2025 17:45:24 +0000 (0:00:00.526) 0:00:08.056 *********** 2025-06-02 17:47:11.623648 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623665 | orchestrator | 2025-06-02 17:47:11.623682 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.623698 | orchestrator | Monday 02 June 2025 17:45:24 +0000 (0:00:00.146) 0:00:08.202 *********** 2025-06-02 17:47:11.623890 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.623921 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.623931 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.623974 | orchestrator | 2025-06-02 17:47:11.623985 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.623995 | orchestrator | Monday 02 June 2025 17:45:24 +0000 (0:00:00.271) 0:00:08.474 *********** 2025-06-02 17:47:11.624004 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.624014 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 17:47:11.624023 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.624033 | orchestrator | 2025-06-02 17:47:11.624043 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.624052 | orchestrator | Monday 02 June 2025 17:45:24 +0000 (0:00:00.303) 0:00:08.777 *********** 2025-06-02 17:47:11.624062 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624071 | orchestrator | 2025-06-02 17:47:11.624081 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.624091 | orchestrator | Monday 02 June 2025 17:45:25 +0000 (0:00:00.112) 0:00:08.890 *********** 2025-06-02 17:47:11.624100 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624110 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.624127 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.624136 | orchestrator | 2025-06-02 17:47:11.624146 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.624156 | orchestrator | Monday 02 June 2025 17:45:25 +0000 (0:00:00.519) 0:00:09.410 *********** 2025-06-02 17:47:11.624165 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.624188 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.624198 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.624207 | orchestrator | 2025-06-02 17:47:11.624217 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.624227 | orchestrator | Monday 02 June 2025 17:45:25 +0000 (0:00:00.327) 0:00:09.737 *********** 2025-06-02 17:47:11.624236 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624246 | orchestrator | 2025-06-02 17:47:11.624255 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.624265 | orchestrator | Monday 02 
June 2025 17:45:26 +0000 (0:00:00.131) 0:00:09.868 *********** 2025-06-02 17:47:11.624275 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624284 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.624294 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.624304 | orchestrator | 2025-06-02 17:47:11.624313 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.624323 | orchestrator | Monday 02 June 2025 17:45:26 +0000 (0:00:00.297) 0:00:10.166 *********** 2025-06-02 17:47:11.624332 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.624342 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.624351 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.624361 | orchestrator | 2025-06-02 17:47:11.624371 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.624380 | orchestrator | Monday 02 June 2025 17:45:26 +0000 (0:00:00.299) 0:00:10.465 *********** 2025-06-02 17:47:11.624390 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624399 | orchestrator | 2025-06-02 17:47:11.624409 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.624418 | orchestrator | Monday 02 June 2025 17:45:26 +0000 (0:00:00.120) 0:00:10.586 *********** 2025-06-02 17:47:11.624428 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624437 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.624447 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.624457 | orchestrator | 2025-06-02 17:47:11.624466 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.624476 | orchestrator | Monday 02 June 2025 17:45:27 +0000 (0:00:00.505) 0:00:11.092 *********** 2025-06-02 17:47:11.624486 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.624495 | 
orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.624511 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.624520 | orchestrator | 2025-06-02 17:47:11.624530 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.624539 | orchestrator | Monday 02 June 2025 17:45:27 +0000 (0:00:00.302) 0:00:11.394 *********** 2025-06-02 17:47:11.624549 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624559 | orchestrator | 2025-06-02 17:47:11.624568 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.624578 | orchestrator | Monday 02 June 2025 17:45:27 +0000 (0:00:00.126) 0:00:11.521 *********** 2025-06-02 17:47:11.624587 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624597 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.624606 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.624616 | orchestrator | 2025-06-02 17:47:11.624625 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 17:47:11.624635 | orchestrator | Monday 02 June 2025 17:45:27 +0000 (0:00:00.281) 0:00:11.803 *********** 2025-06-02 17:47:11.624644 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:47:11.624654 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:47:11.624663 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:47:11.624673 | orchestrator | 2025-06-02 17:47:11.624682 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 17:47:11.624692 | orchestrator | Monday 02 June 2025 17:45:28 +0000 (0:00:00.516) 0:00:12.319 *********** 2025-06-02 17:47:11.624701 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624756 | orchestrator | 2025-06-02 17:47:11.624766 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 17:47:11.624776 | 
orchestrator | Monday 02 June 2025 17:45:28 +0000 (0:00:00.134) 0:00:12.453 *********** 2025-06-02 17:47:11.624785 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.624795 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.624804 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.624814 | orchestrator | 2025-06-02 17:47:11.624824 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-02 17:47:11.624833 | orchestrator | Monday 02 June 2025 17:45:28 +0000 (0:00:00.288) 0:00:12.742 *********** 2025-06-02 17:47:11.624843 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:47:11.624852 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:47:11.624862 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:47:11.624871 | orchestrator | 2025-06-02 17:47:11.624881 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-02 17:47:11.624890 | orchestrator | Monday 02 June 2025 17:45:30 +0000 (0:00:01.579) 0:00:14.322 *********** 2025-06-02 17:47:11.624900 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 17:47:11.624910 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 17:47:11.624919 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-02 17:47:11.624929 | orchestrator | 2025-06-02 17:47:11.624938 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-02 17:47:11.624948 | orchestrator | Monday 02 June 2025 17:45:32 +0000 (0:00:01.858) 0:00:16.180 *********** 2025-06-02 17:47:11.624958 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 17:47:11.624968 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 17:47:11.624989 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 17:47:11.624999 | orchestrator | 2025-06-02 17:47:11.625009 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-02 17:47:11.625025 | orchestrator | Monday 02 June 2025 17:45:34 +0000 (0:00:02.219) 0:00:18.400 *********** 2025-06-02 17:47:11.625034 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 17:47:11.625051 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 17:47:11.625061 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 17:47:11.625070 | orchestrator | 2025-06-02 17:47:11.625080 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-02 17:47:11.625089 | orchestrator | Monday 02 June 2025 17:45:36 +0000 (0:00:01.555) 0:00:19.955 *********** 2025-06-02 17:47:11.625099 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.625108 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.625118 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.625127 | orchestrator | 2025-06-02 17:47:11.625137 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-02 17:47:11.625147 | orchestrator | Monday 02 June 2025 17:45:36 +0000 (0:00:00.319) 0:00:20.274 *********** 2025-06-02 17:47:11.625156 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.625166 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.625175 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.625185 | orchestrator | 2025-06-02 17:47:11.625194 | orchestrator 
| TASK [horizon : include_tasks] ************************************************* 2025-06-02 17:47:11.625204 | orchestrator | Monday 02 June 2025 17:45:36 +0000 (0:00:00.293) 0:00:20.568 *********** 2025-06-02 17:47:11.625214 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:47:11.625223 | orchestrator | 2025-06-02 17:47:11.625233 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-02 17:47:11.625243 | orchestrator | Monday 02 June 2025 17:45:37 +0000 (0:00:00.799) 0:00:21.367 *********** 2025-06-02 17:47:11.625256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.625285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.625302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.625319 | orchestrator | 2025-06-02 17:47:11.625328 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-02 17:47:11.625342 | orchestrator | Monday 02 June 2025 17:45:39 +0000 (0:00:01.553) 0:00:22.921 *********** 2025-06-02 17:47:11.625361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:47:11.625373 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.625395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': 
True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:47:11.625415 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.625426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:47:11.625436 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.625446 | orchestrator | 2025-06-02 17:47:11.625456 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-02 17:47:11.625466 | orchestrator | Monday 02 June 2025 17:45:39 +0000 (0:00:00.666) 0:00:23.588 *********** 2025-06-02 17:47:11.625489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:47:11.625506 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.625517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:47:11.625527 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.625549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:47:11.625566 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.625576 | orchestrator | 2025-06-02 17:47:11.625586 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-02 17:47:11.625595 | orchestrator | Monday 02 June 2025 17:45:40 +0000 (0:00:01.139) 0:00:24.728 *********** 2025-06-02 17:47:11.625606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.625635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.625647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:47:11.625664 | orchestrator | 2025-06-02 17:47:11.625674 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 17:47:11.625683 | orchestrator | Monday 02 June 2025 17:45:42 +0000 (0:00:01.339) 0:00:26.067 *********** 2025-06-02 17:47:11.625693 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:47:11.625702 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:47:11.625730 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:47:11.625740 | orchestrator | 2025-06-02 17:47:11.625758 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 17:47:11.625768 | orchestrator | Monday 02 June 2025 17:45:42 +0000 (0:00:00.293) 0:00:26.360 *********** 2025-06-02 17:47:11.625784 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:47:11.625794 | orchestrator | 2025-06-02 17:47:11.625804 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-02 17:47:11.625813 | orchestrator | Monday 02 June 2025 17:45:43 +0000 (0:00:00.746) 0:00:27.106 *********** 2025-06-02 17:47:11.625823 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:47:11.625832 | orchestrator | 2025-06-02 17:47:11.625842 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 
2025-06-02 17:47:11.625852 | orchestrator | Monday 02 June 2025 17:45:45 +0000 (0:00:02.312) 0:00:29.419 *********** 2025-06-02 17:47:11.625861 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:47:11.625871 | orchestrator | 2025-06-02 17:47:11.625880 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-02 17:47:11.625890 | orchestrator | Monday 02 June 2025 17:45:47 +0000 (0:00:02.110) 0:00:31.529 *********** 2025-06-02 17:47:11.625899 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:47:11.625909 | orchestrator | 2025-06-02 17:47:11.625918 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 17:47:11.625928 | orchestrator | Monday 02 June 2025 17:46:03 +0000 (0:00:15.629) 0:00:47.158 *********** 2025-06-02 17:47:11.625937 | orchestrator | 2025-06-02 17:47:11.625947 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 17:47:11.625956 | orchestrator | Monday 02 June 2025 17:46:03 +0000 (0:00:00.074) 0:00:47.233 *********** 2025-06-02 17:47:11.625966 | orchestrator | 2025-06-02 17:47:11.625975 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 17:47:11.625985 | orchestrator | Monday 02 June 2025 17:46:03 +0000 (0:00:00.070) 0:00:47.303 *********** 2025-06-02 17:47:11.625994 | orchestrator | 2025-06-02 17:47:11.626004 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-02 17:47:11.626058 | orchestrator | Monday 02 June 2025 17:46:03 +0000 (0:00:00.067) 0:00:47.370 *********** 2025-06-02 17:47:11.626070 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:47:11.626080 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:47:11.626090 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:47:11.626099 | orchestrator | 2025-06-02 17:47:11.626109 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 17:47:11.626119 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-02 17:47:11.626137 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 17:47:11.626147 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 17:47:11.626157 | orchestrator | 2025-06-02 17:47:11.626166 | orchestrator | 2025-06-02 17:47:11.626176 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:47:11.626186 | orchestrator | Monday 02 June 2025 17:47:08 +0000 (0:01:05.431) 0:01:52.802 *********** 2025-06-02 17:47:11.626195 | orchestrator | =============================================================================== 2025-06-02 17:47:11.626205 | orchestrator | horizon : Restart horizon container ------------------------------------ 65.43s 2025-06-02 17:47:11.626215 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.63s 2025-06-02 17:47:11.626224 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.31s 2025-06-02 17:47:11.626234 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.22s 2025-06-02 17:47:11.626243 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.11s 2025-06-02 17:47:11.626253 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.86s 2025-06-02 17:47:11.626262 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s 2025-06-02 17:47:11.626272 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.56s 2025-06-02 17:47:11.626281 | orchestrator | service-cert-copy : horizon | 
Copying over extra CA certificates -------- 1.55s 2025-06-02 17:47:11.626291 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.34s 2025-06-02 17:47:11.626301 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.22s 2025-06-02 17:47:11.626310 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.14s 2025-06-02 17:47:11.626320 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.89s 2025-06-02 17:47:11.626329 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-06-02 17:47:11.626339 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-06-02 17:47:11.626348 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2025-06-02 17:47:11.626358 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-06-02 17:47:11.626368 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-06-02 17:47:11.626377 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-06-02 17:47:11.626387 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2025-06-02 17:47:11.626401 | orchestrator | 2025-06-02 17:47:11 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:11.626412 | orchestrator | 2025-06-02 17:47:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:14.667959 | orchestrator | 2025-06-02 17:47:14 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:14.671558 | orchestrator | 2025-06-02 17:47:14 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:14.671643 | orchestrator | 2025-06-02 
17:47:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:17.720033 | orchestrator | 2025-06-02 17:47:17 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:17.721922 | orchestrator | 2025-06-02 17:47:17 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:17.721982 | orchestrator | 2025-06-02 17:47:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:20.763107 | orchestrator | 2025-06-02 17:47:20 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:20.764328 | orchestrator | 2025-06-02 17:47:20 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:20.764389 | orchestrator | 2025-06-02 17:47:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:23.806753 | orchestrator | 2025-06-02 17:47:23 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:23.809118 | orchestrator | 2025-06-02 17:47:23 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:23.809174 | orchestrator | 2025-06-02 17:47:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:26.854251 | orchestrator | 2025-06-02 17:47:26 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:26.855818 | orchestrator | 2025-06-02 17:47:26 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:26.855861 | orchestrator | 2025-06-02 17:47:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:29.898562 | orchestrator | 2025-06-02 17:47:29 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:29.898742 | orchestrator | 2025-06-02 17:47:29 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:29.898762 | orchestrator | 2025-06-02 17:47:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
17:47:32.948154 | orchestrator | 2025-06-02 17:47:32 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:32.950241 | orchestrator | 2025-06-02 17:47:32 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:32.951300 | orchestrator | 2025-06-02 17:47:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:35.992633 | orchestrator | 2025-06-02 17:47:35 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:35.995118 | orchestrator | 2025-06-02 17:47:35 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:35.995181 | orchestrator | 2025-06-02 17:47:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:39.038562 | orchestrator | 2025-06-02 17:47:39 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:39.039844 | orchestrator | 2025-06-02 17:47:39 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state STARTED 2025-06-02 17:47:39.039911 | orchestrator | 2025-06-02 17:47:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:42.093323 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED 2025-06-02 17:47:42.096898 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED 2025-06-02 17:47:42.096973 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED 2025-06-02 17:47:42.098993 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task 44774b6c-811f-4bdf-9a2b-2cb0e78179c7 is in state STARTED 2025-06-02 17:47:42.101272 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task 2ea2dc18-9881-4e0f-a7fe-3b2c1f963f4e is in state SUCCESS 2025-06-02 17:47:42.101306 | orchestrator | 2025-06-02 17:47:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:45.156930 | orchestrator | 2025-06-02 17:47:45 | 
INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:47:45.157734 | orchestrator | 2025-06-02 17:47:45 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:47:45.163833 | orchestrator | 2025-06-02 17:47:45 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:47:45.165120 | orchestrator | 2025-06-02 17:47:45 | INFO  | Task 44774b6c-811f-4bdf-9a2b-2cb0e78179c7 is in state STARTED
2025-06-02 17:47:45.165141 | orchestrator | 2025-06-02 17:47:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:47:48.224734 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:47:48.224943 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:47:48.224962 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:47:48.224973 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task 44774b6c-811f-4bdf-9a2b-2cb0e78179c7 is in state SUCCESS
2025-06-02 17:47:48.224998 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:47:48.225452 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:47:48.225474 | orchestrator | 2025-06-02 17:47:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:47:51.283275 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:47:51.283735 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:47:51.283996 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:47:51.284988 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:47:51.288613 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:47:51.288735 | orchestrator | 2025-06-02 17:47:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:47:54.339746 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:47:54.345582 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:47:54.345942 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:47:54.347558 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:47:54.347587 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:47:54.347595 | orchestrator | 2025-06-02 17:47:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:47:57.388919 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:47:57.390250 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:47:57.392188 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:47:57.393419 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:47:57.395632 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:47:57.395877 | orchestrator | 2025-06-02 17:47:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:00.447431 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:48:00.450372 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:00.453310 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:00.456266 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:48:00.458955 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:00.459018 | orchestrator | 2025-06-02 17:48:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:03.496520 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:48:03.497028 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:03.500017 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:03.501894 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:48:03.505123 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:03.505160 | orchestrator | 2025-06-02 17:48:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:06.558321 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state STARTED
2025-06-02 17:48:06.558918 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:06.560488 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:06.564336 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:48:06.565622 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:06.565745 | orchestrator | 2025-06-02 17:48:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:09.617199 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task b4b94728-db1a-4d58-88d8-da507415b830 is in state SUCCESS
2025-06-02 17:48:09.619661 | orchestrator |
2025-06-02 17:48:09.619760 | orchestrator |
2025-06-02 17:48:09.619778 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-06-02 17:48:09.619791 | orchestrator |
2025-06-02 17:48:09.619803 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-06-02 17:48:09.619815 | orchestrator | Monday 02 June 2025 17:46:44 +0000 (0:00:00.251) 0:00:00.251 ***********
2025-06-02 17:48:09.619827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-06-02 17:48:09.619840 | orchestrator |
2025-06-02 17:48:09.619851 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-06-02 17:48:09.619863 | orchestrator | Monday 02 June 2025 17:46:44 +0000 (0:00:00.229) 0:00:00.480 ***********
2025-06-02 17:48:09.619874 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-06-02 17:48:09.619886 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-06-02 17:48:09.619897 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-06-02 17:48:09.619909 | orchestrator |
2025-06-02 17:48:09.619920 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-06-02 17:48:09.619931 | orchestrator | Monday 02 June 2025 17:46:45 +0000 (0:00:01.254) 0:00:01.735 ***********
2025-06-02 17:48:09.619971 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-06-02 17:48:09.619984 | orchestrator |
2025-06-02 17:48:09.620009 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-06-02 17:48:09.620021 | orchestrator | Monday 02 June 2025 17:46:46 +0000 (0:00:01.204) 0:00:02.939 ***********
2025-06-02 17:48:09.620032 | orchestrator | changed: [testbed-manager]
2025-06-02 17:48:09.620043 | orchestrator |
2025-06-02 17:48:09.620054 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-06-02 17:48:09.620065 | orchestrator | Monday 02 June 2025 17:46:47 +0000 (0:00:01.045) 0:00:03.985 ***********
2025-06-02 17:48:09.620076 | orchestrator | changed: [testbed-manager]
2025-06-02 17:48:09.620087 | orchestrator |
2025-06-02 17:48:09.620097 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-06-02 17:48:09.620142 | orchestrator | Monday 02 June 2025 17:46:48 +0000 (0:00:00.961) 0:00:04.947 ***********
2025-06-02 17:48:09.620263 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-06-02 17:48:09.620327 | orchestrator | ok: [testbed-manager]
2025-06-02 17:48:09.620341 | orchestrator |
2025-06-02 17:48:09.620367 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-06-02 17:48:09.620378 | orchestrator | Monday 02 June 2025 17:47:30 +0000 (0:00:41.480) 0:00:46.428 ***********
2025-06-02 17:48:09.620389 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-06-02 17:48:09.620400 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-06-02 17:48:09.620411 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-06-02 17:48:09.620422 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-06-02 17:48:09.620432 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-06-02 17:48:09.620443 | orchestrator |
2025-06-02 17:48:09.620454 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-06-02 17:48:09.620465 | orchestrator | Monday 02 June 2025 17:47:34 +0000 (0:00:04.135) 0:00:50.563 ***********
2025-06-02 17:48:09.620476 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-06-02 17:48:09.620486 | orchestrator |
2025-06-02 17:48:09.620497 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-06-02 17:48:09.620508 | orchestrator | Monday 02 June 2025 17:47:34 +0000 (0:00:00.459) 0:00:51.023 ***********
2025-06-02 17:48:09.620519 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:48:09.620530 | orchestrator |
2025-06-02 17:48:09.620540 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-06-02 17:48:09.620565 | orchestrator | Monday 02 June 2025 17:47:35 +0000 (0:00:00.312) 0:00:51.176 ***********
2025-06-02 17:48:09.620576 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:48:09.620587 | orchestrator |
2025-06-02 17:48:09.620598 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-06-02 17:48:09.620609 | orchestrator | Monday 02 June 2025 17:47:35 +0000 (0:00:00.312) 0:00:51.490 ***********
2025-06-02 17:48:09.620666 | orchestrator | changed: [testbed-manager]
2025-06-02 17:48:09.620686 | orchestrator |
2025-06-02 17:48:09.620698 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-06-02 17:48:09.620709 | orchestrator | Monday 02 June 2025 17:47:37 +0000 (0:00:01.865) 0:00:53.355 ***********
2025-06-02 17:48:09.620720 | orchestrator | changed: [testbed-manager]
2025-06-02 17:48:09.620731 | orchestrator |
2025-06-02 17:48:09.620741 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-06-02 17:48:09.620752 | orchestrator | Monday 02 June 2025 17:47:37 +0000 (0:00:00.756) 0:00:54.111 ***********
2025-06-02 17:48:09.620763 | orchestrator | changed: [testbed-manager]
2025-06-02 17:48:09.620774 | orchestrator |
2025-06-02 17:48:09.620784 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-06-02 17:48:09.620795 | orchestrator | Monday 02 June 2025 17:47:38 +0000 (0:00:00.621) 0:00:54.733 ***********
2025-06-02 17:48:09.620817 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-06-02 17:48:09.620828 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-06-02 17:48:09.620839 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-06-02 17:48:09.620849 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-02 17:48:09.620860 | orchestrator |
2025-06-02 17:48:09.620871 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:48:09.620882 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:48:09.620894 | orchestrator |
2025-06-02 17:48:09.620904 | orchestrator |
2025-06-02 17:48:09.620983 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:48:09.620997 | orchestrator | Monday 02 June 2025 17:47:40 +0000 (0:00:01.475) 0:00:56.208 ***********
2025-06-02 17:48:09.621008 | orchestrator | ===============================================================================
2025-06-02 17:48:09.621018 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.48s
2025-06-02 17:48:09.621029 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.14s
2025-06-02 17:48:09.621041 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.87s
2025-06-02 17:48:09.621052 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s
2025-06-02 17:48:09.621063 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s
2025-06-02 17:48:09.621074 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.20s
2025-06-02 17:48:09.621084 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.05s
2025-06-02 17:48:09.621095 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2025-06-02 17:48:09.621106 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s
2025-06-02 17:48:09.621118 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s
2025-06-02 17:48:09.621129 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s
2025-06-02 17:48:09.621140 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s
2025-06-02 17:48:09.621151 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-06-02 17:48:09.621162 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2025-06-02 17:48:09.621173 | orchestrator |
2025-06-02 17:48:09.621184 | orchestrator |
2025-06-02 17:48:09.621195 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:48:09.621206 | orchestrator |
2025-06-02 17:48:09.621217 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:48:09.621228 | orchestrator | Monday 02 June 2025 17:47:44 +0000 (0:00:00.183) 0:00:00.183 ***********
2025-06-02 17:48:09.621239 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:09.621250 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:09.621261 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:09.621272 | orchestrator |
2025-06-02 17:48:09.621283 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:48:09.621294 | orchestrator | Monday 02 June 2025 17:47:44 +0000 (0:00:00.314) 0:00:00.498 ***********
2025-06-02 17:48:09.621305 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-02 17:48:09.621316 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-02 17:48:09.621327 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-02 17:48:09.621338 | orchestrator |
2025-06-02 17:48:09.621349 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-02 17:48:09.621360 | orchestrator |
2025-06-02 17:48:09.621371 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-02 17:48:09.621382 | orchestrator | Monday 02 June 2025 17:47:45 +0000 (0:00:00.793) 0:00:01.291 ***********
2025-06-02 17:48:09.621401 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:09.621412 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:09.621423 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:09.621434 | orchestrator |
2025-06-02 17:48:09.621445 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:48:09.621457 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:48:09.621468 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:48:09.621485 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:48:09.621497 | orchestrator |
2025-06-02 17:48:09.621508 | orchestrator |
2025-06-02 17:48:09.621519 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:48:09.621530 | orchestrator | Monday 02 June 2025 17:47:46 +0000 (0:00:00.795) 0:00:02.087 ***********
2025-06-02 17:48:09.621541 | orchestrator | ===============================================================================
2025-06-02 17:48:09.621553 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.80s
2025-06-02 17:48:09.621564 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2025-06-02 17:48:09.621575 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-06-02 17:48:09.621586 | orchestrator |
2025-06-02 17:48:09.621597 | orchestrator |
2025-06-02 17:48:09.621608 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:48:09.621619 | orchestrator |
2025-06-02 17:48:09.621658 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:48:09.621672 | orchestrator | Monday 02 June 2025 17:45:16 +0000 (0:00:00.271) 0:00:00.271 ***********
2025-06-02 17:48:09.621683 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:09.621693 |
orchestrator | ok: [testbed-node-1] 2025-06-02 17:48:09.621704 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:48:09.621715 | orchestrator | 2025-06-02 17:48:09.621726 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:48:09.621737 | orchestrator | Monday 02 June 2025 17:45:16 +0000 (0:00:00.378) 0:00:00.650 *********** 2025-06-02 17:48:09.621748 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 17:48:09.621759 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 17:48:09.621769 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-02 17:48:09.621780 | orchestrator | 2025-06-02 17:48:09.621791 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-02 17:48:09.621802 | orchestrator | 2025-06-02 17:48:09.621847 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:48:09.621860 | orchestrator | Monday 02 June 2025 17:45:17 +0000 (0:00:00.431) 0:00:01.082 *********** 2025-06-02 17:48:09.621871 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:48:09.621882 | orchestrator | 2025-06-02 17:48:09.621892 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-02 17:48:09.621903 | orchestrator | Monday 02 June 2025 17:45:17 +0000 (0:00:00.668) 0:00:01.750 *********** 2025-06-02 17:48:09.621920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.621948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.621970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.622077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:48:09.622109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:48:09.622130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:48:09.622158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.622171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.622188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.622200 | orchestrator | 2025-06-02 17:48:09.622211 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-02 17:48:09.622222 | orchestrator | Monday 02 June 2025 17:45:19 +0000 (0:00:01.783) 0:00:03.534 *********** 2025-06-02 17:48:09.622233 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-02 17:48:09.622244 | orchestrator | 2025-06-02 17:48:09.622255 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-02 17:48:09.622266 | orchestrator | Monday 02 June 2025 17:45:20 +0000 (0:00:00.969) 0:00:04.504 *********** 2025-06-02 17:48:09.622277 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:48:09.622288 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:48:09.622299 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:48:09.622310 | orchestrator | 2025-06-02 17:48:09.622321 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-02 17:48:09.622332 | orchestrator | Monday 02 June 2025 17:45:21 +0000 (0:00:00.517) 0:00:05.022 *********** 2025-06-02 
17:48:09.622342 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:48:09.622353 | orchestrator | 2025-06-02 17:48:09.622364 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:48:09.622383 | orchestrator | Monday 02 June 2025 17:45:21 +0000 (0:00:00.675) 0:00:05.698 *********** 2025-06-02 17:48:09.622394 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:48:09.622405 | orchestrator | 2025-06-02 17:48:09.622416 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-02 17:48:09.622427 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.593) 0:00:06.292 *********** 2025-06-02 17:48:09.622447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.622459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.622477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}})
2025-06-02 17:48:09.622490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.622512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.622532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.622544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.622555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.622571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.622583 | orchestrator |
2025-06-02 17:48:09.622594 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-06-02 17:48:09.622605 | orchestrator | Monday 02 June 2025 17:45:26 +0000 (0:00:03.703) 0:00:09.996 ***********
2025-06-02 17:48:09.622656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.622680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.622691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.622702 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.622714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.622752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.622766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.622784 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:09.622806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.622819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.622830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.622841 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:09.622852 | orchestrator |
2025-06-02 17:48:09.622864 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-06-02 17:48:09.622875 | orchestrator | Monday 02 June 2025 17:45:26 +0000 (0:00:00.613) 0:00:10.609 ***********
2025-06-02 17:48:09.622887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.622899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.622926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.622938 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.622988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623025 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:09.623042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623093 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:09.623105 | orchestrator |
2025-06-02 17:48:09.623116 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-06-02 17:48:09.623127 | orchestrator | Monday 02 June 2025 17:45:27 +0000 (0:00:00.704) 0:00:11.314 ***********
2025-06-02 17:48:09.623139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623280 | orchestrator |
2025-06-02 17:48:09.623291 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-06-02 17:48:09.623302 | orchestrator | Monday 02 June 2025 17:45:31 +0000 (0:00:03.595) 0:00:14.910 ***********
2025-06-02 17:48:09.623322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 17:48:09.623456 | orchestrator |
2025-06-02 17:48:09.623473 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-06-02 17:48:09.623485 | orchestrator | Monday 02 June 2025 17:45:35 +0000 (0:00:04.767) 0:00:19.678 ***********
2025-06-02 17:48:09.623496 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:48:09.623508 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:48:09.623520 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:48:09.623530 | orchestrator |
2025-06-02 17:48:09.623541 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-06-02 17:48:09.623553 | orchestrator | Monday 02 June 2025 17:45:37 +0000 (0:00:01.443) 0:00:21.121 ***********
2025-06-02 17:48:09.623563 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.623575 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:09.623586 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:09.623597 | orchestrator |
2025-06-02 17:48:09.623608 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-06-02 17:48:09.623645 | orchestrator | Monday 02 June 2025 17:45:37 +0000 (0:00:00.503) 0:00:21.625 ***********
2025-06-02 17:48:09.623659 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.623670 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:09.623681 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:09.623691 | orchestrator |
2025-06-02 17:48:09.623702 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-06-02 17:48:09.623761 | orchestrator | Monday 02 June 2025 17:45:38 +0000 (0:00:00.521) 0:00:22.146 ***********
2025-06-02 17:48:09.623774 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.623785 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:09.623796 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:09.623807 | orchestrator |
2025-06-02 17:48:09.623818 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-06-02 17:48:09.623829 | orchestrator | Monday 02 June 2025 17:45:38 +0000 (0:00:00.323) 0:00:22.469 ***********
2025-06-02 17:48:09.623850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 17:48:09.623938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 17:48:09.623950 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.623962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.623980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.623991 | orchestrator | 2025-06-02 
17:48:09.624003 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:48:09.624019 | orchestrator | Monday 02 June 2025 17:45:40 +0000 (0:00:02.269) 0:00:24.739 *********** 2025-06-02 17:48:09.624031 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:09.624042 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:09.624053 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:09.624064 | orchestrator | 2025-06-02 17:48:09.624075 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-02 17:48:09.624087 | orchestrator | Monday 02 June 2025 17:45:41 +0000 (0:00:00.284) 0:00:25.023 *********** 2025-06-02 17:48:09.624114 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 17:48:09.624127 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 17:48:09.624139 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 17:48:09.624149 | orchestrator | 2025-06-02 17:48:09.624160 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-02 17:48:09.624171 | orchestrator | Monday 02 June 2025 17:45:43 +0000 (0:00:02.025) 0:00:27.048 *********** 2025-06-02 17:48:09.624182 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:48:09.624193 | orchestrator | 2025-06-02 17:48:09.624204 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-02 17:48:09.624214 | orchestrator | Monday 02 June 2025 17:45:44 +0000 (0:00:00.941) 0:00:27.989 *********** 2025-06-02 17:48:09.624225 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:09.624236 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:09.624247 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 17:48:09.624257 | orchestrator | 2025-06-02 17:48:09.624268 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-02 17:48:09.624279 | orchestrator | Monday 02 June 2025 17:45:44 +0000 (0:00:00.543) 0:00:28.532 *********** 2025-06-02 17:48:09.624290 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 17:48:09.624307 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 17:48:09.624319 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:48:09.624329 | orchestrator | 2025-06-02 17:48:09.624340 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-02 17:48:09.624351 | orchestrator | Monday 02 June 2025 17:45:45 +0000 (0:00:01.146) 0:00:29.679 *********** 2025-06-02 17:48:09.624362 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:48:09.624373 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:48:09.624384 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:48:09.624395 | orchestrator | 2025-06-02 17:48:09.624407 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-02 17:48:09.624424 | orchestrator | Monday 02 June 2025 17:45:46 +0000 (0:00:00.283) 0:00:29.963 *********** 2025-06-02 17:48:09.624435 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 17:48:09.624446 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 17:48:09.624457 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 17:48:09.624468 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 17:48:09.624479 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 17:48:09.624490 | orchestrator | 
changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 17:48:09.624501 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 17:48:09.624512 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 17:48:09.624522 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 17:48:09.624533 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 17:48:09.624544 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 17:48:09.624555 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 17:48:09.624565 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 17:48:09.624577 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 17:48:09.624587 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 17:48:09.624598 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:48:09.624609 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:48:09.624686 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:48:09.624701 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:48:09.624712 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 
17:48:09.624723 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:48:09.624735 | orchestrator | 2025-06-02 17:48:09.624746 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-02 17:48:09.624764 | orchestrator | Monday 02 June 2025 17:45:55 +0000 (0:00:09.348) 0:00:39.311 *********** 2025-06-02 17:48:09.624775 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:48:09.624787 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:48:09.624797 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:48:09.624808 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:48:09.624820 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:48:09.624831 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:48:09.624842 | orchestrator | 2025-06-02 17:48:09.624853 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-02 17:48:09.624862 | orchestrator | Monday 02 June 2025 17:45:58 +0000 (0:00:02.763) 0:00:42.075 *********** 2025-06-02 17:48:09.624889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.624901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.624913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:48:09.624928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:48:09.624939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 
17:48:09.624966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:48:09.624977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.624987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.624997 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:48:09.625007 | orchestrator | 2025-06-02 17:48:09.625017 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:48:09.625027 | orchestrator | Monday 02 June 2025 17:46:00 +0000 (0:00:02.464) 0:00:44.539 *********** 2025-06-02 17:48:09.625037 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:09.625047 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:09.625057 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:09.625066 | orchestrator | 2025-06-02 17:48:09.625076 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-02 17:48:09.625086 | orchestrator | Monday 02 June 2025 17:46:00 +0000 (0:00:00.280) 0:00:44.820 *********** 2025-06-02 17:48:09.625096 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625105 | orchestrator | 2025-06-02 17:48:09.625115 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-02 17:48:09.625131 | orchestrator | Monday 02 June 2025 17:46:03 +0000 (0:00:02.357) 0:00:47.178 *********** 2025-06-02 17:48:09.625140 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625157 | orchestrator | 2025-06-02 17:48:09.625167 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] ********** 2025-06-02 17:48:09.625177 | orchestrator | Monday 02 June 2025 17:46:05 +0000 (0:00:02.607) 0:00:49.785 *********** 2025-06-02 17:48:09.625186 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:48:09.625196 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:48:09.625206 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:48:09.625216 | orchestrator | 2025-06-02 17:48:09.625226 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-02 17:48:09.625236 | orchestrator | Monday 02 June 2025 17:46:06 +0000 (0:00:00.842) 0:00:50.627 *********** 2025-06-02 17:48:09.625246 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:48:09.625256 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:48:09.625265 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:48:09.625275 | orchestrator | 2025-06-02 17:48:09.625285 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-02 17:48:09.625295 | orchestrator | Monday 02 June 2025 17:46:07 +0000 (0:00:00.312) 0:00:50.940 *********** 2025-06-02 17:48:09.625305 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:09.625315 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:09.625324 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:09.625334 | orchestrator | 2025-06-02 17:48:09.625344 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-02 17:48:09.625354 | orchestrator | Monday 02 June 2025 17:46:07 +0000 (0:00:00.345) 0:00:51.286 *********** 2025-06-02 17:48:09.625364 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625373 | orchestrator | 2025-06-02 17:48:09.625383 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-02 17:48:09.625393 | orchestrator | Monday 02 June 2025 17:46:21 +0000 (0:00:14.035) 0:01:05.321 *********** 2025-06-02 17:48:09.625403 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625412 | orchestrator | 2025-06-02 17:48:09.625428 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 17:48:09.625438 | orchestrator | Monday 02 June 2025 17:46:31 +0000 (0:00:09.658) 0:01:14.980 *********** 2025-06-02 17:48:09.625448 | orchestrator | 2025-06-02 17:48:09.625458 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 17:48:09.625467 | orchestrator | Monday 02 June 2025 17:46:31 +0000 (0:00:00.258) 0:01:15.238 *********** 2025-06-02 17:48:09.625477 | orchestrator | 2025-06-02 17:48:09.625487 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 17:48:09.625497 | orchestrator | Monday 02 June 2025 17:46:31 +0000 (0:00:00.064) 0:01:15.302 *********** 2025-06-02 17:48:09.625507 | orchestrator | 2025-06-02 17:48:09.625517 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-02 17:48:09.625526 | orchestrator | Monday 02 June 2025 17:46:31 +0000 (0:00:00.062) 0:01:15.365 *********** 2025-06-02 17:48:09.625536 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625546 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:48:09.625556 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:48:09.625566 | orchestrator | 2025-06-02 17:48:09.625576 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-02 17:48:09.625586 | orchestrator | Monday 02 June 2025 17:46:58 +0000 (0:00:26.710) 0:01:42.076 *********** 2025-06-02 17:48:09.625596 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:48:09.625605 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625638 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:48:09.625657 | orchestrator | 2025-06-02 17:48:09.625674 | orchestrator | RUNNING HANDLER 
[keystone : Restart keystone container] ************************ 2025-06-02 17:48:09.625690 | orchestrator | Monday 02 June 2025 17:47:08 +0000 (0:00:10.663) 0:01:52.740 *********** 2025-06-02 17:48:09.625706 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:48:09.625721 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625735 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:48:09.625751 | orchestrator | 2025-06-02 17:48:09.625777 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:48:09.625794 | orchestrator | Monday 02 June 2025 17:47:20 +0000 (0:00:11.620) 0:02:04.361 *********** 2025-06-02 17:48:09.625808 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:48:09.625823 | orchestrator | 2025-06-02 17:48:09.625839 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-02 17:48:09.625854 | orchestrator | Monday 02 June 2025 17:47:21 +0000 (0:00:00.756) 0:02:05.117 *********** 2025-06-02 17:48:09.625870 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:48:09.625885 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:48:09.625901 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:48:09.625918 | orchestrator | 2025-06-02 17:48:09.625935 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-02 17:48:09.625952 | orchestrator | Monday 02 June 2025 17:47:21 +0000 (0:00:00.714) 0:02:05.832 *********** 2025-06-02 17:48:09.625968 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:09.625979 | orchestrator | 2025-06-02 17:48:09.625988 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-02 17:48:09.625998 | orchestrator | Monday 02 June 2025 17:47:23 +0000 (0:00:01.769) 0:02:07.601 *********** 2025-06-02 17:48:09.626007 | orchestrator 
| changed: [testbed-node-0] => (item=RegionOne) 2025-06-02 17:48:09.626069 | orchestrator | 2025-06-02 17:48:09.626080 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-02 17:48:09.626090 | orchestrator | Monday 02 June 2025 17:47:34 +0000 (0:00:10.559) 0:02:18.161 *********** 2025-06-02 17:48:09.626100 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-02 17:48:09.626110 | orchestrator | 2025-06-02 17:48:09.626120 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-02 17:48:09.626130 | orchestrator | Monday 02 June 2025 17:47:56 +0000 (0:00:22.412) 0:02:40.573 *********** 2025-06-02 17:48:09.626139 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-02 17:48:09.626156 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-02 17:48:09.626166 | orchestrator | 2025-06-02 17:48:09.626176 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-02 17:48:09.626186 | orchestrator | Monday 02 June 2025 17:48:03 +0000 (0:00:06.703) 0:02:47.276 *********** 2025-06-02 17:48:09.626195 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:09.626205 | orchestrator | 2025-06-02 17:48:09.626214 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-02 17:48:09.626224 | orchestrator | Monday 02 June 2025 17:48:03 +0000 (0:00:00.335) 0:02:47.612 *********** 2025-06-02 17:48:09.626234 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:09.626243 | orchestrator | 2025-06-02 17:48:09.626253 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-02 17:48:09.626263 | orchestrator | Monday 02 June 2025 17:48:03 +0000 (0:00:00.129) 0:02:47.741 *********** 
2025-06-02 17:48:09.626272 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.626282 | orchestrator |
2025-06-02 17:48:09.626292 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-06-02 17:48:09.626302 | orchestrator | Monday 02 June 2025 17:48:03 +0000 (0:00:00.123) 0:02:47.865 ***********
2025-06-02 17:48:09.626311 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.626321 | orchestrator |
2025-06-02 17:48:09.626333 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-06-02 17:48:09.626348 | orchestrator | Monday 02 June 2025 17:48:04 +0000 (0:00:00.327) 0:02:48.193 ***********
2025-06-02 17:48:09.626365 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:09.626381 | orchestrator |
2025-06-02 17:48:09.626397 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-02 17:48:09.626412 | orchestrator | Monday 02 June 2025 17:48:07 +0000 (0:00:03.497) 0:02:51.690 ***********
2025-06-02 17:48:09.626439 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:09.626450 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:09.626460 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:09.626470 | orchestrator |
2025-06-02 17:48:09.626490 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:48:09.626517 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-06-02 17:48:09.626529 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-02 17:48:09.626539 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-02 17:48:09.626548 | orchestrator |
2025-06-02 17:48:09.626559 | orchestrator |
2025-06-02 17:48:09.626568 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:48:09.626578 | orchestrator | Monday 02 June 2025 17:48:08 +0000 (0:00:00.763) 0:02:52.453 ***********
2025-06-02 17:48:09.626587 | orchestrator | ===============================================================================
2025-06-02 17:48:09.626597 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 26.71s
2025-06-02 17:48:09.626606 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.41s
2025-06-02 17:48:09.626616 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.04s
2025-06-02 17:48:09.626654 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.62s
2025-06-02 17:48:09.626671 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.66s
2025-06-02 17:48:09.626685 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.56s
2025-06-02 17:48:09.626695 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.66s
2025-06-02 17:48:09.626704 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.35s
2025-06-02 17:48:09.626714 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.70s
2025-06-02 17:48:09.626724 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.77s
2025-06-02 17:48:09.626733 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.70s
2025-06-02 17:48:09.626743 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.60s
2025-06-02 17:48:09.626752 | orchestrator | keystone : Creating default user role ----------------------------------- 3.50s
2025-06-02 17:48:09.626762 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.76s
2025-06-02 17:48:09.626771 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.61s
2025-06-02 17:48:09.626781 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s
2025-06-02 17:48:09.626791 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.36s
2025-06-02 17:48:09.626801 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.27s
2025-06-02 17:48:09.626810 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.03s
2025-06-02 17:48:09.626820 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.78s
2025-06-02 17:48:09.626829 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:09.626839 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:09.626855 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:48:09.626870 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:09.626888 | orchestrator | 2025-06-02 17:48:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:12.671155 | orchestrator | 2025-06-02 17:48:12 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED
2025-06-02 17:48:12.671254 | orchestrator | 2025-06-02 17:48:12 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:12.672019 | orchestrator | 2025-06-02 17:48:12 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:12.673530 | orchestrator | 2025-06-02 17:48:12 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state STARTED
2025-06-02 17:48:12.673567 | orchestrator | 2025-06-02 17:48:12 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:12.673579 | orchestrator | 2025-06-02 17:48:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:27.857404 | orchestrator | 2025-06-02 17:48:27 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED
2025-06-02 17:48:27.857516 | orchestrator | 2025-06-02 17:48:27 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:27.857829 | orchestrator | 2025-06-02 17:48:27 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:27.859323 | orchestrator | 2025-06-02 17:48:27 | INFO  | Task 31bf779f-d013-4b5d-8b81-c1b01b0695d2 is in state SUCCESS
2025-06-02 17:48:27.859356 | orchestrator | 2025-06-02 17:48:27 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:27.859368 | orchestrator | 2025-06-02 17:48:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:30.889276 | orchestrator | 2025-06-02 17:48:30 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED
2025-06-02 17:48:30.890435 | orchestrator | 2025-06-02 17:48:30 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED
2025-06-02 17:48:30.891373 | orchestrator | 2025-06-02 17:48:30 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:30.892068 | orchestrator | 2025-06-02 17:48:30 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:30.893057 | orchestrator | 2025-06-02 17:48:30 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:30.893077 | orchestrator | 2025-06-02 17:48:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:46.093853 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED
2025-06-02 17:48:46.094003 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task d351270d-3357-4942-b864-4ce536638fa0 is in state STARTED
2025-06-02 17:48:46.096303 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED
2025-06-02 17:48:46.096833 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:48:46.100190 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:48:46.101115 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:48:46.101188 | orchestrator | 2025-06-02 17:48:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:49:01.293176 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED
2025-06-02 17:49:01.293287 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task d351270d-3357-4942-b864-4ce536638fa0 is in state SUCCESS
2025-06-02 17:49:01.293855 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED
2025-06-02 17:49:01.294482 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:49:01.294905 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state STARTED
2025-06-02 17:49:01.295544 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02 17:49:01.295564 | orchestrator | 2025-06-02 17:49:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:49:10.431787 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED
2025-06-02 17:49:10.432051 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED
2025-06-02 17:49:10.433089 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:49:10.433433 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task 46884b6b-d03f-4914-b1d3-e52570b2033d is in state SUCCESS
2025-06-02 17:49:10.433967 | orchestrator |
2025-06-02 17:49:10.433999 | orchestrator
| 2025-06-02 17:49:10.434007 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:49:10.434052 | orchestrator |
2025-06-02 17:49:10.434061 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:49:10.434068 | orchestrator | Monday 02 June 2025 17:47:53 +0000 (0:00:00.385) 0:00:00.385 ***********
2025-06-02 17:49:10.434076 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:49:10.434084 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:49:10.434091 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:49:10.434097 | orchestrator | ok: [testbed-manager]
2025-06-02 17:49:10.434104 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:49:10.434111 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:49:10.434117 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:49:10.434124 | orchestrator |
2025-06-02 17:49:10.434132 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:49:10.434139 | orchestrator | Monday 02 June 2025 17:47:53 +0000 (0:00:00.810) 0:00:01.196 ***********
2025-06-02 17:49:10.434146 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-06-02 17:49:10.434153 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-06-02 17:49:10.434161 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-06-02 17:49:10.434168 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-06-02 17:49:10.434175 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-06-02 17:49:10.434182 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-06-02 17:49:10.434189 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-06-02 17:49:10.434196 | orchestrator |
2025-06-02 17:49:10.434203 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-02 17:49:10.434210 | orchestrator |
2025-06-02 17:49:10.434217 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-06-02 17:49:10.434223 | orchestrator | Monday 02 June 2025 17:47:54 +0000 (0:00:00.818) 0:00:02.015 ***********
2025-06-02 17:49:10.434231 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:49:10.434263 | orchestrator |
2025-06-02 17:49:10.434270 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-06-02 17:49:10.434277 | orchestrator | Monday 02 June 2025 17:47:57 +0000 (0:00:02.718) 0:00:04.733 ***********
2025-06-02 17:49:10.434297 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-06-02 17:49:10.434304 | orchestrator |
2025-06-02 17:49:10.434311 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-06-02 17:49:10.434317 | orchestrator | Monday 02 June 2025 17:48:01 +0000 (0:00:03.787) 0:00:08.521 ***********
2025-06-02 17:49:10.434325 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-06-02 17:49:10.434334 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-06-02 17:49:10.434340 | orchestrator |
2025-06-02 17:49:10.434347 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-06-02 17:49:10.434353 | orchestrator | Monday 02 June 2025 17:48:08 +0000 (0:00:07.688) 0:00:16.209 ***********
2025-06-02 17:49:10.434360 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 17:49:10.434366 | orchestrator |
2025-06-02 17:49:10.434370 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-06-02 17:49:10.434374 | orchestrator | Monday 02 June 2025 17:48:12 +0000 (0:00:03.260) 0:00:19.470 ***********
2025-06-02 17:49:10.434377 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 17:49:10.434381 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-06-02 17:49:10.434385 | orchestrator |
2025-06-02 17:49:10.434389 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-06-02 17:49:10.434392 | orchestrator | Monday 02 June 2025 17:48:16 +0000 (0:00:03.917) 0:00:23.388 ***********
2025-06-02 17:49:10.434396 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 17:49:10.434400 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-06-02 17:49:10.434404 | orchestrator |
2025-06-02 17:49:10.434408 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-06-02 17:49:10.434411 | orchestrator | Monday 02 June 2025 17:48:22 +0000 (0:00:06.438) 0:00:29.826 ***********
2025-06-02 17:49:10.434415 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-06-02 17:49:10.434419 | orchestrator |
2025-06-02 17:49:10.434422 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:49:10.434426 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434430 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434435 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434438 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434442 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434456 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434460 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434464 | orchestrator |
2025-06-02 17:49:10.434468 | orchestrator |
2025-06-02 17:49:10.434471 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:49:10.434475 | orchestrator | Monday 02 June 2025 17:48:27 +0000 (0:00:04.817) 0:00:34.644 ***********
2025-06-02 17:49:10.434485 | orchestrator | ===============================================================================
2025-06-02 17:49:10.434489 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.69s
2025-06-02 17:49:10.434493 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.44s
2025-06-02 17:49:10.434496 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.82s
2025-06-02 17:49:10.434500 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.92s
2025-06-02 17:49:10.434504 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.79s
2025-06-02 17:49:10.434508 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.26s
2025-06-02 17:49:10.434511 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.72s
2025-06-02 17:49:10.434515 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2025-06-02 17:49:10.434519 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s
2025-06-02 17:49:10.434522 | orchestrator |
2025-06-02 17:49:10.434526 | orchestrator | None
2025-06-02 17:49:10.434531 | orchestrator |
2025-06-02 17:49:10.434534 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-06-02 17:49:10.434538 | orchestrator |
2025-06-02 17:49:10.434542 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-06-02 17:49:10.434545 | orchestrator | Monday 02 June 2025 17:47:44 +0000 (0:00:00.290) 0:00:00.290 ***********
2025-06-02 17:49:10.434549 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434553 | orchestrator |
2025-06-02 17:49:10.434557 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-06-02 17:49:10.434561 | orchestrator | Monday 02 June 2025 17:47:46 +0000 (0:00:01.375) 0:00:01.666 ***********
2025-06-02 17:49:10.434564 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434568 | orchestrator |
2025-06-02 17:49:10.434572 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-06-02 17:49:10.434578 | orchestrator | Monday 02 June 2025 17:47:47 +0000 (0:00:01.125) 0:00:02.791 ***********
2025-06-02 17:49:10.434582 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434586 | orchestrator |
2025-06-02 17:49:10.434590 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-06-02 17:49:10.434593 | orchestrator | Monday 02 June 2025 17:47:48 +0000 (0:00:01.024) 0:00:03.816 ***********
2025-06-02 17:49:10.434597 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434601 | orchestrator |
2025-06-02 17:49:10.434605 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-06-02 17:49:10.434608 | orchestrator | Monday 02 June 2025 17:47:49 +0000 (0:00:01.444) 0:00:05.261 ***********
2025-06-02 17:49:10.434612 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434616 | orchestrator |
2025-06-02 17:49:10.434619 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-06-02 17:49:10.434623 | orchestrator | Monday 02 June 2025 17:47:51 +0000 (0:00:01.423) 0:00:06.684 ***********
2025-06-02 17:49:10.434627 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434631 | orchestrator |
2025-06-02 17:49:10.434634 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-06-02 17:49:10.434638 | orchestrator | Monday 02 June 2025 17:47:52 +0000 (0:00:01.076) 0:00:07.760 ***********
2025-06-02 17:49:10.434642 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434645 | orchestrator |
2025-06-02 17:49:10.434649 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-06-02 17:49:10.434653 | orchestrator | Monday 02 June 2025 17:47:54 +0000 (0:00:02.075) 0:00:09.836 ***********
2025-06-02 17:49:10.434656 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434660 | orchestrator |
2025-06-02 17:49:10.434664 | orchestrator | TASK [Create admin user] *******************************************************
2025-06-02 17:49:10.434668 | orchestrator | Monday 02 June 2025 17:47:55 +0000 (0:00:01.209) 0:00:11.046 ***********
2025-06-02 17:49:10.434676 | orchestrator | changed: [testbed-manager]
2025-06-02 17:49:10.434679 | orchestrator |
2025-06-02 17:49:10.434683 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-06-02 17:49:10.434687 | orchestrator | Monday 02 June 2025 17:48:45 +0000 (0:00:49.722) 0:01:00.768 ***********
2025-06-02 17:49:10.434691 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:49:10.434694 | orchestrator |
2025-06-02 17:49:10.434698 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-06-02 17:49:10.434702 | orchestrator |
2025-06-02 17:49:10.434706 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-06-02 17:49:10.434751 | orchestrator | Monday 02 June 2025 17:48:45 +0000 (0:00:00.221) 0:01:00.990 ***********
2025-06-02 17:49:10.434758 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:49:10.434763 | orchestrator |
2025-06-02 17:49:10.434769 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-06-02 17:49:10.434774 | orchestrator |
2025-06-02 17:49:10.434780 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-06-02 17:49:10.434785 | orchestrator | Monday 02 June 2025 17:48:57 +0000 (0:00:11.592) 0:01:12.583 ***********
2025-06-02 17:49:10.434790 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:49:10.434795 | orchestrator |
2025-06-02 17:49:10.434800 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-06-02 17:49:10.434806 | orchestrator |
2025-06-02 17:49:10.434811 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-06-02 17:49:10.434816 | orchestrator | Monday 02 June 2025 17:49:08 +0000 (0:00:11.397) 0:01:23.981 ***********
2025-06-02 17:49:10.434822 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:49:10.434827 | orchestrator |
2025-06-02 17:49:10.434838 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:49:10.434845 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:49:10.434852 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434858 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434864 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:49:10.434871 | orchestrator |
2025-06-02 17:49:10.434876 | orchestrator |
2025-06-02 17:49:10.434879 | orchestrator |
2025-06-02 17:49:10.434883 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:49:10.434887 | orchestrator | Monday 02 June 2025 17:49:09 +0000 (0:00:01.128) 0:01:25.109 ***********
2025-06-02 17:49:10.434890 | orchestrator | ===============================================================================
2025-06-02 17:49:10.434894 | orchestrator | Create admin user ------------------------------------------------------ 49.72s
2025-06-02 17:49:10.434898 | orchestrator | Restart ceph manager service ------------------------------------------- 24.12s
2025-06-02 17:49:10.434902 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s
2025-06-02 17:49:10.434905 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.44s
2025-06-02 17:49:10.434909 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.42s
2025-06-02 17:49:10.434915 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.38s
2025-06-02 17:49:10.434921 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.21s
2025-06-02 17:49:10.434927 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.13s
2025-06-02 17:49:10.434932 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.08s
2025-06-02 17:49:10.434942 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.02s
2025-06-02 17:49:10.434953 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.22s
2025-06-02 17:49:10.437286 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state STARTED
2025-06-02
17:49:10.437326 | orchestrator | 2025-06-02 17:49:10 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 17:49:13 to 17:50:56 omitted: tasks f801a160-37ed-4016-8c7c-8fc6e4411449, d33c75d6-e4e6-44b5-be9b-eb0f0d134468, b4760653-3695-4cf2-aef2-2f308e8400d6 and 1b74f777-abaf-4940-a84c-cb618fca1475 remained in state STARTED ...]
2025-06-02 17:51:00.025930 | orchestrator | 2025-06-02 17:51:00 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED
2025-06-02 17:51:00.027716 | orchestrator | 2025-06-02 17:51:00 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED
2025-06-02 17:51:00.029837 | orchestrator | 2025-06-02 17:51:00 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED
2025-06-02 17:51:00.033431 | orchestrator |
2025-06-02 17:51:00.033461 | orchestrator |
2025-06-02 17:51:00.033466 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:51:00.033471 | orchestrator |
2025-06-02 17:51:00.033476 | orchestrator | TASK [Group hosts based on Kolla action]
***************************************
2025-06-02 17:51:00.033481 | orchestrator | Monday 02 June 2025 17:47:53 +0000 (0:00:00.324) 0:00:00.324 ***********
2025-06-02 17:51:00.033486 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:51:00.033494 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:51:00.033500 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:51:00.033506 | orchestrator |
2025-06-02 17:51:00.033512 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:51:00.033518 | orchestrator | Monday 02 June 2025 17:47:53 +0000 (0:00:00.378) 0:00:00.703 ***********
2025-06-02 17:51:00.033546 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-06-02 17:51:00.033553 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-06-02 17:51:00.033560 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-06-02 17:51:00.033565 | orchestrator |
2025-06-02 17:51:00.033570 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-06-02 17:51:00.033574 | orchestrator |
2025-06-02 17:51:00.033578 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-02 17:51:00.033582 | orchestrator | Monday 02 June 2025 17:47:54 +0000 (0:00:00.495) 0:00:01.198 ***********
2025-06-02 17:51:00.033586 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:51:00.033591 | orchestrator |
2025-06-02 17:51:00.033595 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-06-02 17:51:00.033599 | orchestrator | Monday 02 June 2025 17:47:54 +0000 (0:00:00.563) 0:00:01.762 ***********
2025-06-02 17:51:00.033602 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-06-02 17:51:00.033606 | orchestrator |
2025-06-02 17:51:00.033610 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-06-02 17:51:00.033613 | orchestrator | Monday 02 June 2025 17:47:59 +0000 (0:00:04.334) 0:00:06.096 ***********
2025-06-02 17:51:00.033618 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-06-02 17:51:00.033622 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-06-02 17:51:00.033626 | orchestrator |
2025-06-02 17:51:00.033630 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-06-02 17:51:00.033653 | orchestrator | Monday 02 June 2025 17:48:05 +0000 (0:00:06.357) 0:00:12.454 ***********
2025-06-02 17:51:00.033657 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-06-02 17:51:00.033661 | orchestrator |
2025-06-02 17:51:00.033664 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-06-02 17:51:00.033668 | orchestrator | Monday 02 June 2025 17:48:08 +0000 (0:00:03.579) 0:00:16.034 ***********
2025-06-02 17:51:00.033673 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 17:51:00.033677 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-06-02 17:51:00.033681 | orchestrator |
2025-06-02 17:51:00.033684 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-06-02 17:51:00.033688 | orchestrator | Monday 02 June 2025 17:48:13 +0000 (0:00:04.048) 0:00:20.082 ***********
2025-06-02 17:51:00.033692 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 17:51:00.033696 | orchestrator |
2025-06-02 17:51:00.033700 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-06-02 17:51:00.033704 | orchestrator | Monday 02 June 2025 17:48:16 +0000 (0:00:03.478) 0:00:23.561 *********** 2025-06-02
17:51:00.033707 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-02 17:51:00.033711 | orchestrator | 2025-06-02 17:51:00.033715 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-02 17:51:00.033719 | orchestrator | Monday 02 June 2025 17:48:20 +0000 (0:00:04.201) 0:00:27.762 *********** 2025-06-02 17:51:00.033748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.033758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.033774 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.033781 | orchestrator | 2025-06-02 17:51:00.033787 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 17:51:00.033793 | orchestrator | Monday 02 June 2025 17:48:27 +0000 (0:00:06.513) 0:00:34.276 *********** 2025-06-02 17:51:00.033803 | orchestrator | 
included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:51:00.033808 | orchestrator | 2025-06-02 17:51:00.033812 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-02 17:51:00.033816 | orchestrator | Monday 02 June 2025 17:48:27 +0000 (0:00:00.511) 0:00:34.787 *********** 2025-06-02 17:51:00.033820 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.033824 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:00.033827 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:00.033831 | orchestrator | 2025-06-02 17:51:00.033835 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-02 17:51:00.033839 | orchestrator | Monday 02 June 2025 17:48:31 +0000 (0:00:04.003) 0:00:38.790 *********** 2025-06-02 17:51:00.033842 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 17:51:00.033846 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 17:51:00.033900 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 17:51:00.033905 | orchestrator | 2025-06-02 17:51:00.033909 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-02 17:51:00.033913 | orchestrator | Monday 02 June 2025 17:48:33 +0000 (0:00:01.530) 0:00:40.321 *********** 2025-06-02 17:51:00.033917 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 17:51:00.033920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 17:51:00.033924 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 17:51:00.033928 | orchestrator | 2025-06-02 17:51:00.033932 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-02 17:51:00.033935 | orchestrator | Monday 02 June 2025 17:48:34 +0000 (0:00:01.036) 0:00:41.357 *********** 2025-06-02 17:51:00.033939 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:51:00.033943 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:51:00.033947 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:51:00.033951 | orchestrator | 2025-06-02 17:51:00.033955 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-02 17:51:00.033958 | orchestrator | Monday 02 June 2025 17:48:35 +0000 (0:00:00.740) 0:00:42.098 *********** 2025-06-02 17:51:00.033962 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.033966 | orchestrator | 2025-06-02 17:51:00.033970 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-02 17:51:00.033974 | orchestrator | Monday 02 June 2025 17:48:35 +0000 (0:00:00.142) 0:00:42.241 *********** 2025-06-02 17:51:00.033977 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.033981 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.033985 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.033989 | orchestrator | 2025-06-02 17:51:00.033992 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 17:51:00.033996 | orchestrator | Monday 02 June 2025 17:48:35 +0000 (0:00:00.341) 0:00:42.582 *********** 2025-06-02 17:51:00.034000 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:51:00.034004 | orchestrator | 2025-06-02 17:51:00.034007 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 
2025-06-02 17:51:00.034011 | orchestrator | Monday 02 June 2025 17:48:36 +0000 (0:00:00.511) 0:00:43.093 *********** 2025-06-02 17:51:00.034043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034063 | orchestrator | 2025-06-02 17:51:00.034068 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-02 17:51:00.034072 | orchestrator | Monday 02 June 2025 17:48:42 +0000 (0:00:06.178) 0:00:49.272 *********** 2025-06-02 17:51:00.034086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:51:00.034091 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:51:00.034101 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:51:00.034122 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034126 | orchestrator | 2025-06-02 17:51:00.034131 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-02 17:51:00.034135 | orchestrator | Monday 02 June 2025 17:48:46 +0000 (0:00:04.000) 0:00:53.272 *********** 2025-06-02 17:51:00.034140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:51:00.034146 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:51:00.034164 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:51:00.034174 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034178 | orchestrator | 2025-06-02 17:51:00.034182 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-02 17:51:00.034187 | orchestrator | Monday 02 June 2025 17:48:51 +0000 (0:00:04.937) 0:00:58.210 *********** 2025-06-02 17:51:00.034192 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034196 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034201 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034205 | orchestrator | 2025-06-02 17:51:00.034209 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-02 17:51:00.034214 | orchestrator | Monday 02 June 2025 17:48:55 +0000 (0:00:04.797) 0:01:03.007 *********** 2025-06-02 17:51:00.034225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034248 | orchestrator | 2025-06-02 17:51:00.034254 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-02 17:51:00.034259 | orchestrator | Monday 02 June 2025 17:49:01 +0000 (0:00:05.198) 0:01:08.206 *********** 2025-06-02 17:51:00.034264 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:00.034268 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.034273 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:00.034277 | orchestrator | 2025-06-02 17:51:00.034282 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-02 17:51:00.034367 | orchestrator | Monday 02 June 2025 17:49:09 +0000 (0:00:08.575) 0:01:16.782 *********** 2025-06-02 17:51:00.034374 | orchestrator | 2025-06-02 17:51:00 | INFO  | Task 1b74f777-abaf-4940-a84c-cb618fca1475 is in state SUCCESS 2025-06-02 17:51:00.034379 | orchestrator |
2025-06-02 17:51:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:00.034384 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034388 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034393 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034398 | orchestrator | 2025-06-02 17:51:00.034402 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-02 17:51:00.034406 | orchestrator | Monday 02 June 2025 17:49:16 +0000 (0:00:06.387) 0:01:23.170 *********** 2025-06-02 17:51:00.034410 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034413 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034417 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034421 | orchestrator | 2025-06-02 17:51:00.034424 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-02 17:51:00.034428 | orchestrator | Monday 02 June 2025 17:49:21 +0000 (0:00:04.565) 0:01:28.632 *********** 2025-06-02 17:51:00.034432 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034436 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034439 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034443 | orchestrator | 2025-06-02 17:51:00.034447 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-02 17:51:00.034451 | orchestrator | Monday 02 June 2025 17:49:26 +0000 (0:00:04.565) 0:01:33.197 *********** 2025-06-02 17:51:00.034454 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034458 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034463 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034469 | orchestrator | 2025-06-02 17:51:00.034475 |
orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-02 17:51:00.034481 | orchestrator | Monday 02 June 2025 17:49:32 +0000 (0:00:06.294) 0:01:39.492 *********** 2025-06-02 17:51:00.034487 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034493 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034499 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034505 | orchestrator | 2025-06-02 17:51:00.034511 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-02 17:51:00.034518 | orchestrator | Monday 02 June 2025 17:49:33 +0000 (0:00:00.603) 0:01:40.095 *********** 2025-06-02 17:51:00.034524 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 17:51:00.034535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034539 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 17:51:00.034543 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034547 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 17:51:00.034550 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034554 | orchestrator | 2025-06-02 17:51:00.034558 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-02 17:51:00.034562 | orchestrator | Monday 02 June 2025 17:49:37 +0000 (0:00:04.162) 0:01:44.258 *********** 2025-06-02 17:51:00.034569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:51:00.034591 | orchestrator | 2025-06-02 17:51:00.034595 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 17:51:00.034598 | orchestrator | Monday 02 June 2025 17:49:41 +0000 (0:00:04.510) 0:01:48.769 *********** 2025-06-02 17:51:00.034602 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:00.034606 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:00.034610 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:00.034613 | orchestrator | 2025-06-02 17:51:00.034617 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-02 17:51:00.034621 | orchestrator | Monday 02 June 2025 17:49:42 +0000 (0:00:00.317) 0:01:49.087 *********** 2025-06-02 17:51:00.034627 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.034631 | orchestrator | 2025-06-02 17:51:00.034635 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-02 17:51:00.034638 | orchestrator | 
Monday 02 June 2025 17:49:44 +0000 (0:00:02.019) 0:01:51.106 *********** 2025-06-02 17:51:00.034642 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.034646 | orchestrator | 2025-06-02 17:51:00.034650 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-02 17:51:00.034655 | orchestrator | Monday 02 June 2025 17:49:46 +0000 (0:00:02.109) 0:01:53.216 *********** 2025-06-02 17:51:00.034659 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.034663 | orchestrator | 2025-06-02 17:51:00.034667 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-02 17:51:00.034671 | orchestrator | Monday 02 June 2025 17:49:48 +0000 (0:00:02.093) 0:01:55.309 *********** 2025-06-02 17:51:00.034675 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.034679 | orchestrator | 2025-06-02 17:51:00.034682 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-02 17:51:00.034686 | orchestrator | Monday 02 June 2025 17:50:17 +0000 (0:00:28.805) 0:02:24.114 *********** 2025-06-02 17:51:00.034690 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.034694 | orchestrator | 2025-06-02 17:51:00.034697 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 17:51:00.034704 | orchestrator | Monday 02 June 2025 17:50:19 +0000 (0:00:02.430) 0:02:26.544 *********** 2025-06-02 17:51:00.034708 | orchestrator | 2025-06-02 17:51:00.034712 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 17:51:00.034716 | orchestrator | Monday 02 June 2025 17:50:19 +0000 (0:00:00.064) 0:02:26.609 *********** 2025-06-02 17:51:00.034719 | orchestrator | 2025-06-02 17:51:00.034723 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 17:51:00.034727 | orchestrator | 
Monday 02 June 2025 17:50:19 +0000 (0:00:00.064) 0:02:26.673 *********** 2025-06-02 17:51:00.034731 | orchestrator | 2025-06-02 17:51:00.034734 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-02 17:51:00.034738 | orchestrator | Monday 02 June 2025 17:50:19 +0000 (0:00:00.066) 0:02:26.739 *********** 2025-06-02 17:51:00.034742 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:00.034746 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:00.034749 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:00.034753 | orchestrator | 2025-06-02 17:51:00.034757 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:51:00.034761 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 17:51:00.034767 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 17:51:00.034770 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 17:51:00.034774 | orchestrator | 2025-06-02 17:51:00.034778 | orchestrator | 2025-06-02 17:51:00.034782 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:51:00.034786 | orchestrator | Monday 02 June 2025 17:50:58 +0000 (0:00:39.304) 0:03:06.044 *********** 2025-06-02 17:51:00.034789 | orchestrator | =============================================================================== 2025-06-02 17:51:00.034793 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.31s 2025-06-02 17:51:00.034797 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.81s 2025-06-02 17:51:00.034801 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.58s 2025-06-02 17:51:00.034805 | 
orchestrator | glance : Ensuring config directories exist ------------------------------ 6.51s 2025-06-02 17:51:00.034809 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.39s 2025-06-02 17:51:00.034812 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.36s 2025-06-02 17:51:00.034816 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.29s 2025-06-02 17:51:00.034820 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.18s 2025-06-02 17:51:00.034824 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.46s 2025-06-02 17:51:00.034828 | orchestrator | glance : Copying over config.json files for services -------------------- 5.20s 2025-06-02 17:51:00.034831 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.94s 2025-06-02 17:51:00.034835 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.80s 2025-06-02 17:51:00.034839 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.57s 2025-06-02 17:51:00.034843 | orchestrator | glance : Check glance containers ---------------------------------------- 4.51s 2025-06-02 17:51:00.034846 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.33s 2025-06-02 17:51:00.034850 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.20s 2025-06-02 17:51:00.034854 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.16s 2025-06-02 17:51:00.034878 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.05s 2025-06-02 17:51:00.034882 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.00s 2025-06-02 17:51:00.034886 | orchestrator | 
service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.00s 2025-06-02 17:51:03.095529 | orchestrator | 2025-06-02 17:51:03 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:51:03.097307 | orchestrator | 2025-06-02 17:51:03 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:51:03.099169 | orchestrator | 2025-06-02 17:51:03 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED 2025-06-02 17:51:03.100805 | orchestrator | 2025-06-02 17:51:03 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:51:03.101096 | orchestrator | 2025-06-02 17:51:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:06.146435 | orchestrator | 2025-06-02 17:51:06 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:51:06.146583 | orchestrator | 2025-06-02 17:51:06 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:51:06.148292 | orchestrator | 2025-06-02 17:51:06 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED 2025-06-02 17:51:06.148831 | orchestrator | 2025-06-02 17:51:06 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:51:06.148851 | orchestrator | 2025-06-02 17:51:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:09.182573 | orchestrator | 2025-06-02 17:51:09 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:51:09.183950 | orchestrator | 2025-06-02 17:51:09 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:51:09.185289 | orchestrator | 2025-06-02 17:51:09 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED 2025-06-02 17:51:09.187004 | orchestrator | 2025-06-02 17:51:09 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:51:09.187060 | orchestrator | 2025-06-02 
17:51:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:12.234068 | orchestrator | 2025-06-02 17:51:12 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:51:12.236476 | orchestrator | 2025-06-02 17:51:12 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:51:12.241508 | orchestrator | 2025-06-02 17:51:12 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state STARTED 2025-06-02 17:51:12.243538 | orchestrator | 2025-06-02 17:51:12 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:51:12.243583 | orchestrator | 2025-06-02 17:51:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:15.299321 | orchestrator | 2025-06-02 17:51:15 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:51:15.300785 | orchestrator | 2025-06-02 17:51:15 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:51:15.306979 | orchestrator | 2025-06-02 17:51:15 | INFO  | Task b4760653-3695-4cf2-aef2-2f308e8400d6 is in state SUCCESS 2025-06-02 17:51:15.308022 | orchestrator | 2025-06-02 17:51:15.308050 | orchestrator | 2025-06-02 17:51:15.308056 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:51:15.308061 | orchestrator | 2025-06-02 17:51:15.308065 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:51:15.308070 | orchestrator | Monday 02 June 2025 17:47:44 +0000 (0:00:00.306) 0:00:00.306 *********** 2025-06-02 17:51:15.308074 | orchestrator | ok: [testbed-manager] 2025-06-02 17:51:15.308127 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:51:15.308133 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:51:15.308138 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:51:15.308141 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:51:15.308145 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 17:51:15.308149 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:51:15.308153 | orchestrator | 2025-06-02 17:51:15.308157 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:51:15.308161 | orchestrator | Monday 02 June 2025 17:47:45 +0000 (0:00:01.088) 0:00:01.395 *********** 2025-06-02 17:51:15.308165 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-02 17:51:15.308170 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-02 17:51:15.308174 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-02 17:51:15.308177 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-02 17:51:15.308181 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-02 17:51:15.308185 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-02 17:51:15.308189 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 17:51:15.308192 | orchestrator | 2025-06-02 17:51:15.308196 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 17:51:15.308200 | orchestrator | 2025-06-02 17:51:15.308203 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 17:51:15.308207 | orchestrator | Monday 02 June 2025 17:47:46 +0000 (0:00:00.895) 0:00:02.291 *********** 2025-06-02 17:51:15.308223 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:51:15.308228 | orchestrator | 2025-06-02 17:51:15.308232 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-02 17:51:15.308236 | orchestrator | Monday 02 June 2025 17:47:48 +0000 (0:00:01.699) 0:00:03.990 
*********** 2025-06-02 17:51:15.308266 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:51:15.308275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.308280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.308289 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.308312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.308317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.308321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.308328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.308334 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.308343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.308351 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.308366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.308370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.308374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.308383 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:51:15.308388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.308408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.308416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.308424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.308428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.308432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.308470 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.308476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.308480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308513 | orchestrator |
2025-06-02 17:51:15.308520 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-06-02 17:51:15.308524 | orchestrator | Monday 02 June 2025 17:47:52 +0000 (0:00:04.464) 0:00:08.455 ***********
2025-06-02 17:51:15.308528 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:51:15.308532 | orchestrator |
2025-06-02 17:51:15.308536 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-06-02 17:51:15.308540 | orchestrator | Monday 02 June 2025 17:47:54 +0000 (0:00:01.582) 0:00:10.038 ***********
2025-06-02 17:51:15.308565 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 17:51:15.308574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.308578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.308587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.308592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.308596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.308602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.308606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.308613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308639 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.308670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308715 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 17:51:15.308722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.308979 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309035 | orchestrator |
2025-06-02 17:51:15.309039 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-06-02 17:51:15.309044 | orchestrator | Monday 02 June 2025 17:48:00 +0000 (0:00:06.272) 0:00:16.310 ***********
2025-06-02 17:51:15.309051 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 17:51:15.309058 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309067 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309080 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 17:51:15.309087 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309094 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:51:15.309141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:51:15.309278 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:51:15.309289 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:51:15.309296 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:51:15.309303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309315 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:51:15.309319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309336 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:51:15.309339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309357 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:51:15.309361 | orchestrator |
2025-06-02 17:51:15.309365 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-06-02 17:51:15.309369 | orchestrator | Monday 02 June 2025 17:48:02 +0000 (0:00:01.583) 0:00:17.894 ***********
2025-06-02 17:51:15.309373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 17:51:15.309377 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 17:51:15.309381 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 17:51:15.309388 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:51:15.309396 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309402 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:51:15.309406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:51:15.309410 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:51:15.309414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 
17:51:15.309459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309473 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.309481 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.309489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:51:15.309496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:51:15.309532 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.309542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:51:15.309546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309554 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.309558 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:51:15.309699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309721 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.309725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:51:15.309733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:51:15.309741 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.309745 | orchestrator | 2025-06-02 17:51:15.309748 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-02 17:51:15.309752 | orchestrator | Monday 02 June 2025 17:48:04 +0000 (0:00:01.971) 0:00:19.866 *********** 2025-06-02 17:51:15.309756 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:51:15.309760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.309772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.309776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.309780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.309786 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.309790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.309794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.309798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.309805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.309816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.309820 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.309842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.309850 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:51:15.309854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.309862 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.309996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.310000 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.310005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.310011 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.310067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.310075 | orchestrator | 2025-06-02 17:51:15.310081 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-02 17:51:15.310087 | orchestrator | Monday 02 June 2025 17:48:10 +0000 (0:00:06.697) 0:00:26.563 *********** 2025-06-02 17:51:15.310091 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:51:15.310095 | orchestrator | 2025-06-02 17:51:15.310099 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-02 17:51:15.310107 | orchestrator | Monday 02 June 2025 17:48:11 +0000 (0:00:00.900) 0:00:27.463 *********** 2025-06-02 17:51:15.310111 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076282, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310115 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076282, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310124 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076282, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310128 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076282, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310132 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076282, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.310139 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1076271, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310143 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1076271, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310150 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076282, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310154 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1076271, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310162 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1076271, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310227 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1076271, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310233 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1076249, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310240 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076282, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310244 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1076249, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310252 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1076271, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.310257 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1076249, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310268 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1076249, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310275 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1076249, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310281 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1076271, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310290 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1076251, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310300 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1076251, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310307 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1076251, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310311 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1076251, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310665 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1076265, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310684 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1076251, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310689 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1076265, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310699 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1076249, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310709 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1076265, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310713 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1076249, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.310717 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1076265, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310726 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1076256, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310730 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1076256, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310734 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1076264, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310741 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1076265, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310749 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1076256, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310753 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1076251, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310757 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1076256, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310763 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1076272, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7490535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310767 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1076256, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310771 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1076279, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310779 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1076264, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310788 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1076264, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310792 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1076265, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310796 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1076264, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310802 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1076251, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.310806 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1076256, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310810 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1076264, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310820 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1076272, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7490535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310824 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1076264, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310828 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1076303, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310832 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1076272, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7490535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310836 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1076272, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7490535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310844 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1076272, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7490535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.310850 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1076279, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311017 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1076272, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7490535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311033 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1076279, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311037 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1076275, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7500534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311041 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1076279, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311045 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1076303, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311055 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1076265, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311059 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1076279, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311111 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1076279, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311115 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076254, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311119 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1076303, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311123 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1076303, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311127 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1076275, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7500534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311134 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1076303, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311142 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1076303, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311148 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1076263, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1748884039.7460535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311152 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1076256, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311156 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1076275, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7500534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311160 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076254, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311164 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1076275, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7500534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311172 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076247, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7420533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311179 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1076275, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7500534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311185 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076254, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311189 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1076275, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7500534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311193 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1076263, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7460535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311197 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1076263, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7460535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311201 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076254, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311210 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076254, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311217 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076254, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311223 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1076264, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7470534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311227 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1076267, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311231 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076247, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7420533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311235 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076247, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7420533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311239 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1076263, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7460535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311246 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1076263, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7460535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311253 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1076263, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7460535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311259 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1076300, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311263 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076247, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7420533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1076267, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311271 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1076267, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311275 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076247, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7420533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311292 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076247, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1748884039.7420533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311297 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1076267, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311304 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1076272, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7490535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311308 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1076267, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311313 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1076300, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311317 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1076300, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311322 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1076262, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311334 | orchestrator | skipping: [testbed-node-5] => 
(item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1076300, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311338 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1076267, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311345 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1076284, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7530534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311350 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1076300, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311354 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.311359 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1076262, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311364 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1076262, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311368 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 
1076262, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311381 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1076262, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311385 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1076279, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7520535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1076300, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311397 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1076284, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7530534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311401 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.311406 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1076284, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7530534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311410 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.311415 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1076284, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7530534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311422 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.311426 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1076284, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7530534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311431 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.311439 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1076262, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311443 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1076284, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7530534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:51:15.311448 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.311455 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1076303, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311460 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1076275, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7500534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311464 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076254, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7430534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311472 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1076263, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7460535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311476 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1076247, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7420533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311483 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1076267, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7480536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311488 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1076300, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7560534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311494 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1076262, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7450533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311499 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1076284, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7530534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:51:15.311503 | orchestrator | 2025-06-02 17:51:15.311508 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-02 17:51:15.311512 | orchestrator | Monday 02 June 2025 17:48:35 
+0000 (0:00:23.875) 0:00:51.339 *********** 2025-06-02 17:51:15.311517 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:51:15.311521 | orchestrator | 2025-06-02 17:51:15.311525 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-02 17:51:15.311533 | orchestrator | Monday 02 June 2025 17:48:36 +0000 (0:00:00.731) 0:00:52.070 *********** 2025-06-02 17:51:15.311537 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.311542 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311546 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-02 17:51:15.311551 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311555 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-02 17:51:15.311560 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.311564 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311568 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-02 17:51:15.311572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311577 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-02 17:51:15.311581 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.311585 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311588 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-02 17:51:15.311592 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311596 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-02 17:51:15.311600 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.311603 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311607 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-02 17:51:15.311611 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311615 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-02 17:51:15.311618 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.311622 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311628 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-02 17:51:15.311632 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311636 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-02 17:51:15.311640 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.311643 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311647 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-02 17:51:15.311651 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311655 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-02 17:51:15.311658 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.311662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311666 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-02 17:51:15.311670 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:51:15.311673 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-02 17:51:15.311677 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:51:15.311681 | orchestrator | ok: [testbed-node-0 -> localhost] 
2025-06-02 17:51:15.311685 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:51:15.311688 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 17:51:15.311692 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 17:51:15.311696 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 17:51:15.311700 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 17:51:15.311703 | orchestrator | 2025-06-02 17:51:15.311707 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-02 17:51:15.311715 | orchestrator | Monday 02 June 2025 17:48:39 +0000 (0:00:02.734) 0:00:54.805 *********** 2025-06-02 17:51:15.311718 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:51:15.311723 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.311729 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:51:15.311733 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:51:15.311737 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:51:15.311741 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:51:15.311744 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.311748 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.311752 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.311756 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.311759 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:51:15.311763 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.311767 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-02 17:51:15.311770 | orchestrator | 2025-06-02 17:51:15.311774 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-02 17:51:15.311778 | orchestrator | Monday 02 June 2025 17:49:01 +0000 (0:00:22.075) 0:01:16.881 *********** 2025-06-02 17:51:15.311782 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 17:51:15.311785 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.311789 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 17:51:15.311793 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.311796 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 17:51:15.311800 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.311804 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 17:51:15.311808 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.311811 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 17:51:15.311815 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.311819 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 17:51:15.311823 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.311826 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-02 17:51:15.311830 | orchestrator | 2025-06-02 17:51:15.311834 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-02 17:51:15.311838 | orchestrator | Monday 02 June 2025 17:49:05 +0000 
(0:00:04.247) 0:01:21.128 *********** 2025-06-02 17:51:15.311841 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 17:51:15.311846 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.311850 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 17:51:15.311854 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.311857 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-02 17:51:15.312066 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 17:51:15.312080 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.312084 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 17:51:15.312088 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.312091 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 17:51:15.312095 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.312099 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 17:51:15.312103 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.312106 | orchestrator | 2025-06-02 17:51:15.312110 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-02 17:51:15.312114 | orchestrator | Monday 02 June 2025 17:49:08 +0000 (0:00:02.783) 0:01:23.912 *********** 2025-06-02 17:51:15.312118 | 
orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:51:15.312121 | orchestrator | 2025-06-02 17:51:15.312125 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-02 17:51:15.312129 | orchestrator | Monday 02 June 2025 17:49:09 +0000 (0:00:00.885) 0:01:24.797 *********** 2025-06-02 17:51:15.312133 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:51:15.312136 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.312140 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.312144 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.312147 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.312151 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.312155 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.312158 | orchestrator | 2025-06-02 17:51:15.312162 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-02 17:51:15.312166 | orchestrator | Monday 02 June 2025 17:49:10 +0000 (0:00:01.157) 0:01:25.955 *********** 2025-06-02 17:51:15.312173 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:51:15.312177 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.312181 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.312184 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.312188 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:15.312191 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:15.312195 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:15.312199 | orchestrator | 2025-06-02 17:51:15.312203 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-02 17:51:15.312206 | orchestrator | Monday 02 June 2025 17:49:13 +0000 (0:00:03.229) 0:01:29.185 *********** 2025-06-02 17:51:15.312210 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 17:51:15.312214 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 17:51:15.312218 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 17:51:15.312222 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.312225 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:51:15.312229 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.312233 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 17:51:15.312237 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.312240 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 17:51:15.312244 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.312248 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 17:51:15.312251 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.312255 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 17:51:15.312263 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.312267 | orchestrator | 2025-06-02 17:51:15.312271 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-02 17:51:15.312274 | orchestrator | Monday 02 June 2025 17:49:15 +0000 (0:00:02.436) 0:01:31.621 *********** 2025-06-02 17:51:15.312278 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 17:51:15.312282 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 17:51:15.312286 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 17:51:15.312289 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.312293 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 17:51:15.312297 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.312301 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-02 17:51:15.312304 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 17:51:15.312308 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.312312 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 17:51:15.312315 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.312319 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 17:51:15.312326 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.312330 | orchestrator | 2025-06-02 17:51:15.312333 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-02 17:51:15.312337 | orchestrator | Monday 02 June 2025 17:49:18 +0000 (0:00:02.150) 0:01:33.772 *********** 2025-06-02 17:51:15.312341 | orchestrator | [WARNING]: Skipped 2025-06-02 17:51:15.312345 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-02 17:51:15.312348 | orchestrator | due to this access issue: 2025-06-02 17:51:15.312352 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-02 17:51:15.312356 | orchestrator | not a directory 2025-06-02 17:51:15.312360 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:51:15.312363 | orchestrator | 2025-06-02 
17:51:15.312367 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-02 17:51:15.312371 | orchestrator | Monday 02 June 2025 17:49:19 +0000 (0:00:01.581) 0:01:35.354 *********** 2025-06-02 17:51:15.312375 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:51:15.312378 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.312382 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.312386 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.312389 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.312393 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.312397 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.312400 | orchestrator | 2025-06-02 17:51:15.312404 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-02 17:51:15.312408 | orchestrator | Monday 02 June 2025 17:49:21 +0000 (0:00:01.369) 0:01:36.724 *********** 2025-06-02 17:51:15.312411 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:51:15.312415 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:51:15.312419 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:51:15.312422 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:51:15.312426 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:51:15.312430 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:51:15.312433 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:51:15.312437 | orchestrator | 2025-06-02 17:51:15.312441 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-02 17:51:15.312450 | orchestrator | Monday 02 June 2025 17:49:21 +0000 (0:00:00.787) 0:01:37.511 *********** 2025-06-02 17:51:15.312455 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:51:15.312461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.312465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.312469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.312477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.312481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.312485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2025-06-02 17:51:15.312494 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:51:15.312499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312521 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2025-06-02 17:51:15.312530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312559 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:51:15.312568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312572 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:51:15.312592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312596 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:51:15.312607 | orchestrator | 2025-06-02 17:51:15.312611 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 17:51:15.312615 | orchestrator | Monday 02 June 2025 17:49:26 +0000 (0:00:04.531) 0:01:42.043 *********** 2025-06-02 17:51:15.312621 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 17:51:15.312625 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:51:15.312629 | orchestrator | 2025-06-02 17:51:15.312633 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:51:15.312636 | orchestrator | Monday 02 June 2025 17:49:29 +0000 (0:00:02.654) 0:01:44.697 *********** 2025-06-02 17:51:15.312640 | orchestrator | 2025-06-02 17:51:15.312644 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:51:15.312648 | 
orchestrator | Monday 02 June 2025 17:49:29 +0000 (0:00:00.550) 0:01:45.248 *********** 2025-06-02 17:51:15.312651 | orchestrator | 2025-06-02 17:51:15.312655 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:51:15.312659 | orchestrator | Monday 02 June 2025 17:49:29 +0000 (0:00:00.131) 0:01:45.380 *********** 2025-06-02 17:51:15.312663 | orchestrator | 2025-06-02 17:51:15.312666 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:51:15.312670 | orchestrator | Monday 02 June 2025 17:49:29 +0000 (0:00:00.124) 0:01:45.505 *********** 2025-06-02 17:51:15.312674 | orchestrator | 2025-06-02 17:51:15.312677 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:51:15.312681 | orchestrator | Monday 02 June 2025 17:49:29 +0000 (0:00:00.141) 0:01:45.647 *********** 2025-06-02 17:51:15.312685 | orchestrator | 2025-06-02 17:51:15.312689 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:51:15.312692 | orchestrator | Monday 02 June 2025 17:49:30 +0000 (0:00:00.130) 0:01:45.777 *********** 2025-06-02 17:51:15.312697 | orchestrator | 2025-06-02 17:51:15.312701 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:51:15.312706 | orchestrator | Monday 02 June 2025 17:49:30 +0000 (0:00:00.110) 0:01:45.887 *********** 2025-06-02 17:51:15.312710 | orchestrator | 2025-06-02 17:51:15.312714 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-02 17:51:15.312719 | orchestrator | Monday 02 June 2025 17:49:30 +0000 (0:00:00.186) 0:01:46.074 *********** 2025-06-02 17:51:15.312723 | orchestrator | changed: [testbed-manager] 2025-06-02 17:51:15.312727 | orchestrator | 2025-06-02 17:51:15.312732 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-node-exporter container] ****** 2025-06-02 17:51:15.312736 | orchestrator | Monday 02 June 2025 17:49:52 +0000 (0:00:21.840) 0:02:07.915 *********** 2025-06-02 17:51:15.312740 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:15.312744 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:51:15.312749 | orchestrator | changed: [testbed-manager] 2025-06-02 17:51:15.312753 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:15.312757 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:51:15.312761 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:51:15.312766 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:15.312770 | orchestrator | 2025-06-02 17:51:15.312774 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-02 17:51:15.312778 | orchestrator | Monday 02 June 2025 17:50:06 +0000 (0:00:13.931) 0:02:21.847 *********** 2025-06-02 17:51:15.312786 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:15.312790 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:15.312794 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:15.312799 | orchestrator | 2025-06-02 17:51:15.312803 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-02 17:51:15.312807 | orchestrator | Monday 02 June 2025 17:50:17 +0000 (0:00:10.872) 0:02:32.719 *********** 2025-06-02 17:51:15.312811 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:15.312816 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:15.312820 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:15.312824 | orchestrator | 2025-06-02 17:51:15.312828 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-02 17:51:15.312833 | orchestrator | Monday 02 June 2025 17:50:22 +0000 (0:00:05.522) 0:02:38.242 *********** 2025-06-02 17:51:15.312837 | orchestrator | changed: 
[testbed-manager] 2025-06-02 17:51:15.312843 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:15.312847 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:15.312852 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:15.312856 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:51:15.312860 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:51:15.312865 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:51:15.312869 | orchestrator | 2025-06-02 17:51:15.312873 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-02 17:51:15.312894 | orchestrator | Monday 02 June 2025 17:50:36 +0000 (0:00:14.308) 0:02:52.550 *********** 2025-06-02 17:51:15.312901 | orchestrator | changed: [testbed-manager] 2025-06-02 17:51:15.312908 | orchestrator | 2025-06-02 17:51:15.312914 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-02 17:51:15.312920 | orchestrator | Monday 02 June 2025 17:50:45 +0000 (0:00:08.116) 0:03:00.667 *********** 2025-06-02 17:51:15.312925 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:51:15.312931 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:51:15.312937 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:51:15.312943 | orchestrator | 2025-06-02 17:51:15.312956 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-02 17:51:15.312962 | orchestrator | Monday 02 June 2025 17:50:56 +0000 (0:00:11.095) 0:03:11.762 *********** 2025-06-02 17:51:15.312968 | orchestrator | changed: [testbed-manager] 2025-06-02 17:51:15.312974 | orchestrator | 2025-06-02 17:51:15.312980 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-02 17:51:15.312986 | orchestrator | Monday 02 June 2025 17:51:05 +0000 (0:00:09.820) 0:03:21.583 *********** 2025-06-02 17:51:15.312994 | orchestrator | changed: 
[testbed-node-3] 2025-06-02 17:51:15.313007 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:51:15.313014 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:51:15.313020 | orchestrator | 2025-06-02 17:51:15.313026 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:51:15.313032 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:51:15.313043 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 17:51:15.313049 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 17:51:15.313054 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 17:51:15.313058 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 17:51:15.313063 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 17:51:15.313071 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 17:51:15.313075 | orchestrator | 2025-06-02 17:51:15.313079 | orchestrator | 2025-06-02 17:51:15.313083 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:51:15.313086 | orchestrator | Monday 02 June 2025 17:51:12 +0000 (0:00:07.019) 0:03:28.602 *********** 2025-06-02 17:51:15.313090 | orchestrator | =============================================================================== 2025-06-02 17:51:15.313094 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.88s 2025-06-02 17:51:15.313099 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 22.08s 2025-06-02 17:51:15.313104 | 
orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.84s 2025-06-02 17:51:15.313110 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.31s 2025-06-02 17:51:15.313120 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.93s 2025-06-02 17:51:15.313126 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.10s 2025-06-02 17:51:15.313135 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.87s 2025-06-02 17:51:15.313140 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.82s 2025-06-02 17:51:15.313146 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.12s 2025-06-02 17:51:15.313151 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.02s 2025-06-02 17:51:15.313156 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.70s 2025-06-02 17:51:15.313162 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.27s 2025-06-02 17:51:15.313168 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.52s 2025-06-02 17:51:15.313173 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.53s 2025-06-02 17:51:15.313178 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.46s 2025-06-02 17:51:15.313184 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.25s 2025-06-02 17:51:15.313190 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.23s 2025-06-02 17:51:15.313195 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.78s 2025-06-02 17:51:15.313206 | orchestrator | 
prometheus : Find prometheus host config overrides ---------------------- 2.73s 2025-06-02 17:51:15.313212 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 2.65s 2025-06-02 17:51:15.313217 | orchestrator | 2025-06-02 17:51:15 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:51:15.313222 | orchestrator | 2025-06-02 17:51:15 | INFO  | Task 4ec448f7-8d0b-4ba1-8e1f-f3c144ca7460 is in state STARTED 2025-06-02 17:51:15.313228 | orchestrator | 2025-06-02 17:51:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:18.369105 | orchestrator | 2025-06-02 17:51:18 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:51:18.371121 | orchestrator | 2025-06-02 17:51:18 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:51:18.373423 | orchestrator | 2025-06-02 17:51:18 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:51:18.375230 | orchestrator | 2025-06-02 17:51:18 | INFO  | Task 4ec448f7-8d0b-4ba1-8e1f-f3c144ca7460 is in state STARTED 2025-06-02 17:51:18.375300 | orchestrator | 2025-06-02 17:51:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:21.425412 | orchestrator | 2025-06-02 17:51:21 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:51:21.427842 | orchestrator | 2025-06-02 17:51:21 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:51:21.430426 | orchestrator | 2025-06-02 17:51:21 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:51:21.430517 | orchestrator | 2025-06-02 17:51:21 | INFO  | Task 4ec448f7-8d0b-4ba1-8e1f-f3c144ca7460 is in state STARTED 2025-06-02 17:51:21.430538 | orchestrator | 2025-06-02 17:51:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:51:24.474881 | orchestrator | 2025-06-02 17:51:24 | INFO  | Task 
f801a160-37ed-4016-8c7c-8fc6e4411449 is in state STARTED 2025-06-02 17:52:01.014627 | orchestrator | 2025-06-02 17:52:01 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:52:01.016863 | orchestrator | 2025-06-02 17:52:01 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:52:01.016916 | orchestrator | 2025-06-02 17:52:01 | INFO  | Task 4ec448f7-8d0b-4ba1-8e1f-f3c144ca7460 is in state STARTED 2025-06-02 17:52:01.016926 | orchestrator | 2025-06-02 17:52:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:04.043885 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task f801a160-37ed-4016-8c7c-8fc6e4411449 is in state SUCCESS 2025-06-02 17:52:04.044927 | orchestrator | 2025-06-02 17:52:04.045061 | orchestrator | 2025-06-02 17:52:04.045074 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:52:04.045083 | orchestrator | 2025-06-02 17:52:04.045090 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:52:04.045098 | orchestrator | Monday 02 June 2025 17:48:15 +0000 (0:00:00.879) 0:00:00.879 *********** 2025-06-02 17:52:04.045106 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:52:04.045114 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:52:04.045122 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:52:04.045129 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:52:04.045136 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:52:04.045144 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:52:04.045151 | orchestrator | 2025-06-02 17:52:04.045285 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:52:04.045300 | orchestrator | Monday 02 June 2025 17:48:17 +0000 (0:00:01.889) 0:00:02.769 *********** 2025-06-02 17:52:04.045309 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-02 
17:52:04.045317 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-02 17:52:04.045326 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-02 17:52:04.045335 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-02 17:52:04.045343 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-02 17:52:04.045351 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-02 17:52:04.045359 | orchestrator | 2025-06-02 17:52:04.045368 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-02 17:52:04.045376 | orchestrator | 2025-06-02 17:52:04.045383 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 17:52:04.045391 | orchestrator | Monday 02 June 2025 17:48:18 +0000 (0:00:01.102) 0:00:03.872 *********** 2025-06-02 17:52:04.045447 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:52:04.045481 | orchestrator | 2025-06-02 17:52:04.045490 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-02 17:52:04.045497 | orchestrator | Monday 02 June 2025 17:48:20 +0000 (0:00:02.819) 0:00:06.692 *********** 2025-06-02 17:52:04.045505 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-02 17:52:04.045512 | orchestrator | 2025-06-02 17:52:04.045519 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-02 17:52:04.045527 | orchestrator | Monday 02 June 2025 17:48:24 +0000 (0:00:03.503) 0:00:10.196 *********** 2025-06-02 17:52:04.045537 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-02 17:52:04.045547 | orchestrator | changed: 
[testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-02 17:52:04.045555 | orchestrator | 2025-06-02 17:52:04.045563 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-02 17:52:04.045571 | orchestrator | Monday 02 June 2025 17:48:31 +0000 (0:00:06.802) 0:00:16.998 *********** 2025-06-02 17:52:04.045579 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:52:04.045587 | orchestrator | 2025-06-02 17:52:04.045593 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-02 17:52:04.045601 | orchestrator | Monday 02 June 2025 17:48:34 +0000 (0:00:02.915) 0:00:19.913 *********** 2025-06-02 17:52:04.045608 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:52:04.045614 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-02 17:52:04.045622 | orchestrator | 2025-06-02 17:52:04.045629 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-02 17:52:04.045635 | orchestrator | Monday 02 June 2025 17:48:37 +0000 (0:00:03.448) 0:00:23.362 *********** 2025-06-02 17:52:04.045642 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:52:04.045650 | orchestrator | 2025-06-02 17:52:04.045657 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-02 17:52:04.045664 | orchestrator | Monday 02 June 2025 17:48:41 +0000 (0:00:03.497) 0:00:26.859 *********** 2025-06-02 17:52:04.045671 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-02 17:52:04.045679 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-02 17:52:04.045686 | orchestrator | 2025-06-02 17:52:04.045693 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 
2025-06-02 17:52:04.045701 | orchestrator | Monday 02 June 2025 17:48:48 +0000 (0:00:07.298) 0:00:34.158 ***********
2025-06-02 17:52:04.045731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:52:04.045749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:52:04.045767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:52:04.045776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:52:04.045890 | orchestrator |
2025-06-02 17:52:04.045898 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 17:52:04.045925 | orchestrator | Monday 02 June 2025 17:48:51 +0000 (0:00:02.951) 0:00:37.109 ***********
2025-06-02 17:52:04.045965 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:04.045973 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:04.046000 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:04.046011 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:04.046066 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:04.046073 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:04.046081 | orchestrator |
2025-06-02 17:52:04.046088 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 17:52:04.046095 | orchestrator | Monday 02 June 2025 17:48:52 +0000 (0:00:00.879) 0:00:37.989 ***********
2025-06-02 17:52:04.046102 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:04.046109 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:04.046117 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:04.046125 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:52:04.046132 | orchestrator |
2025-06-02 17:52:04.046139 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-06-02 17:52:04.046146 | orchestrator | Monday 02 June 2025 17:48:53 +0000 (0:00:01.248) 0:00:39.238 ***********
2025-06-02 17:52:04.046154 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-06-02 17:52:04.046162 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-06-02 17:52:04.046169 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-06-02 17:52:04.046177 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-06-02 17:52:04.046184 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-06-02 17:52:04.046191 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-06-02 17:52:04.046199 | orchestrator |
2025-06-02 17:52:04.046205 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-06-02 17:52:04.046212 | orchestrator | Monday 02 June 2025 17:48:55 +0000 (0:00:02.449) 0:00:41.687 ***********
2025-06-02 17:52:04.046220 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046230 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046258 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046270 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046275 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046279 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046284 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046297 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046306 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046311 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046317 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046321 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:52:04.046329 | orchestrator |
2025-06-02 17:52:04.046334 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-06-02 17:52:04.046338 | orchestrator | Monday 02 June 2025 17:49:00 +0000 (0:00:04.376) 0:00:46.064 ***********
2025-06-02 17:52:04.046343 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:04.046348 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:04.046353 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:04.046357 | orchestrator |
2025-06-02 17:52:04.046361 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-02 17:52:04.046366 | orchestrator | Monday 02 June 2025 17:49:01 +0000 (0:00:01.633) 0:00:47.698 ***********
2025-06-02 17:52:04.046373 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-02 17:52:04.046378 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-02 17:52:04.046382 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-02 17:52:04.046387 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:52:04.046391 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:52:04.046395 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:52:04.046399 | orchestrator |
2025-06-02 17:52:04.046412 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-02 17:52:04.046417 | orchestrator | Monday 02 June 2025 17:49:05 +0000 (0:00:03.899) 0:00:51.597 ***********
2025-06-02 17:52:04.046421 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-02 17:52:04.046426 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-02 17:52:04.046430 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-02 17:52:04.046435 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-02 17:52:04.046439 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-02 17:52:04.046443 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-02 17:52:04.046448 | orchestrator |
2025-06-02 17:52:04.046452 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-02 17:52:04.046456 | orchestrator | Monday 02 June 2025 17:49:07 +0000 (0:00:00.258) 0:00:53.275 ***********
2025-06-02 17:52:04.046461 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:04.046465 | orchestrator |
2025-06-02 17:52:04.046469 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-02 17:52:04.046474 | orchestrator | Monday 02 June 2025 17:49:07 +0000 (0:00:00.945) 0:00:53.534 ***********
2025-06-02 17:52:04.046478 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:04.046482 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:04.046487 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:04.046491 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:04.046495 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:04.046500 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:04.046504 | orchestrator |
2025-06-02 17:52:04.046508 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 17:52:04.046513 | orchestrator | Monday 02 June 2025 17:49:08 +0000 (0:00:00.945) 0:00:54.480 ***********
2025-06-02 17:52:04.046518 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:52:04.046527 | orchestrator |
2025-06-02 17:52:04.046532 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-02 17:52:04.046536 | orchestrator | Monday 02 June 2025 17:49:09 +0000 (0:00:01.230) 0:00:55.711 ***********
2025-06-02 17:52:04.046541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:52:04.046546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:52:04.046559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:52:04.046564 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046620 | orchestrator | 2025-06-02 17:52:04.046624 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-02 17:52:04.046629 | orchestrator | Monday 02 June 2025 17:49:14 +0000 (0:00:04.736) 0:01:00.447 *********** 2025-06-02 17:52:04.046637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.046645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046650 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:04.046654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.046662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.046672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2025-06-02 17:52:04.046676 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:04.046688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046700 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:04.046705 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:04.046709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046718 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:04.046723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046740 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:04.046745 | orchestrator | 2025-06-02 17:52:04.046749 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-02 17:52:04.046754 | orchestrator | Monday 02 June 2025 17:49:16 +0000 (0:00:02.088) 0:01:02.535 *********** 2025-06-02 17:52:04.046758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.046766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046770 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:04.046775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.046779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.046801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046806 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:04.046810 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:04.046815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046823 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:04.046827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046846 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:04.046855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.046863 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:04.046867 | orchestrator | 2025-06-02 17:52:04.046872 | orchestrator | TASK [cinder : Copying over 
config.json files for services] ******************** 2025-06-02 17:52:04.046876 | orchestrator | Monday 02 June 2025 17:49:19 +0000 (0:00:02.609) 0:01:05.145 *********** 2025-06-02 17:52:04.046880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.046885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.046895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.046903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.046991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047043 | orchestrator | 2025-06-02 17:52:04.047047 | orchestrator | TASK 
[cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 17:52:04.047051 | orchestrator | Monday 02 June 2025 17:49:22 +0000 (0:00:03.459) 0:01:08.604 *********** 2025-06-02 17:52:04.047055 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 17:52:04.047060 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:04.047064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 17:52:04.047073 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 17:52:04.047078 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:04.047082 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 17:52:04.047086 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 17:52:04.047090 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:04.047098 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 17:52:04.047102 | orchestrator | 2025-06-02 17:52:04.047106 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-02 17:52:04.047110 | orchestrator | Monday 02 June 2025 17:49:25 +0000 (0:00:03.093) 0:01:11.698 *********** 2025-06-02 17:52:04.047118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.047123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.047128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.047132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047193 | orchestrator | 2025-06-02 17:52:04.047198 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 17:52:04.047202 | orchestrator | Monday 02 June 2025 17:49:35 +0000 (0:00:09.541) 0:01:21.240 *********** 2025-06-02 17:52:04.047206 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:04.047211 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:04.047215 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:04.047219 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:52:04.047223 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:52:04.047227 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:52:04.047232 | orchestrator | 2025-06-02 17:52:04.047236 | orchestrator | TASK [cinder : 
Copying over existing policy file] ****************************** 2025-06-02 17:52:04.047240 | orchestrator | Monday 02 June 2025 17:49:38 +0000 (0:00:02.590) 0:01:23.831 *********** 2025-06-02 17:52:04.047245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.047253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.047267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047272 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:04.047276 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:04.047280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:52:04.047285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047289 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:04.047293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047305 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:04.047318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047332 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:04.047338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:52:04.047357 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:04.047363 | orchestrator | 2025-06-02 17:52:04.047369 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-02 17:52:04.047376 | orchestrator | Monday 02 June 2025 17:49:39 +0000 (0:00:01.724) 0:01:25.556 *********** 2025-06-02 17:52:04.047382 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:04.047389 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:04.047395 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:04.047401 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:04.047408 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:04.047415 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:04.047422 | orchestrator | 2025-06-02 17:52:04.047427 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 17:52:04.047431 | orchestrator | Monday 02 June 2025 17:49:40 +0000 (0:00:00.953) 0:01:26.509 *********** 2025-06-02 17:52:04.047443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.047448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.047453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:52:04.047461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-06-02 17:52:04.047476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:52:04.047517 | orchestrator | 2025-06-02 17:52:04.047521 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 17:52:04.047525 | orchestrator | Monday 02 June 2025 17:49:43 +0000 (0:00:02.360) 0:01:28.870 *********** 2025-06-02 17:52:04.047530 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:04.047535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:04.047539 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:04.047544 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:04.047548 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:04.047552 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:04.047556 | orchestrator | 2025-06-02 17:52:04.047560 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-02 17:52:04.047568 | 
orchestrator | Monday 02 June 2025 17:49:43 +0000 (0:00:00.681) 0:01:29.551 *********** 2025-06-02 17:52:04.047572 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:04.047576 | orchestrator | 2025-06-02 17:52:04.047580 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-02 17:52:04.047584 | orchestrator | Monday 02 June 2025 17:49:45 +0000 (0:00:02.091) 0:01:31.642 *********** 2025-06-02 17:52:04.047588 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:04.047592 | orchestrator | 2025-06-02 17:52:04.047597 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-02 17:52:04.047601 | orchestrator | Monday 02 June 2025 17:49:47 +0000 (0:00:02.047) 0:01:33.690 *********** 2025-06-02 17:52:04.047605 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:04.047609 | orchestrator | 2025-06-02 17:52:04.047613 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:52:04.047621 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:17.608) 0:01:51.298 *********** 2025-06-02 17:52:04.047625 | orchestrator | 2025-06-02 17:52:04.047629 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:52:04.047633 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:00.068) 0:01:51.367 *********** 2025-06-02 17:52:04.047637 | orchestrator | 2025-06-02 17:52:04.047641 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:52:04.047646 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:00.063) 0:01:51.431 *********** 2025-06-02 17:52:04.047650 | orchestrator | 2025-06-02 17:52:04.047654 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:52:04.047658 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:00.064) 
0:01:51.496 *********** 2025-06-02 17:52:04.047662 | orchestrator | 2025-06-02 17:52:04.047667 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:52:04.047671 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:00.078) 0:01:51.574 *********** 2025-06-02 17:52:04.047677 | orchestrator | 2025-06-02 17:52:04.047685 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:52:04.047689 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:00.065) 0:01:51.639 *********** 2025-06-02 17:52:04.047693 | orchestrator | 2025-06-02 17:52:04.047697 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-02 17:52:04.047701 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:00.060) 0:01:51.700 *********** 2025-06-02 17:52:04.047706 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:04.047710 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:52:04.047714 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:04.047718 | orchestrator | 2025-06-02 17:52:04.047722 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-02 17:52:04.047726 | orchestrator | Monday 02 June 2025 17:50:30 +0000 (0:00:24.511) 0:02:16.212 *********** 2025-06-02 17:52:04.047730 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:04.047734 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:04.047738 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:52:04.047742 | orchestrator | 2025-06-02 17:52:04.047747 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-02 17:52:04.047751 | orchestrator | Monday 02 June 2025 17:50:37 +0000 (0:00:06.978) 0:02:23.191 *********** 2025-06-02 17:52:04.047755 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:52:04.047759 | orchestrator | changed: 
[testbed-node-5] 2025-06-02 17:52:04.047763 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:52:04.047767 | orchestrator | 2025-06-02 17:52:04.047771 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-02 17:52:04.047775 | orchestrator | Monday 02 June 2025 17:51:53 +0000 (0:01:16.272) 0:03:39.464 *********** 2025-06-02 17:52:04.047779 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:52:04.047783 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:52:04.047788 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:52:04.047795 | orchestrator | 2025-06-02 17:52:04.047799 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-02 17:52:04.047804 | orchestrator | Monday 02 June 2025 17:52:01 +0000 (0:00:07.787) 0:03:47.252 *********** 2025-06-02 17:52:04.047808 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:04.047812 | orchestrator | 2025-06-02 17:52:04.047816 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:52:04.047824 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 17:52:04.047829 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 17:52:04.047833 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 17:52:04.047840 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:52:04.047845 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:52:04.047849 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:52:04.047853 | orchestrator | 2025-06-02 17:52:04.047857 | orchestrator 
| 2025-06-02 17:52:04.047862 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:52:04.047866 | orchestrator | Monday 02 June 2025 17:52:02 +0000 (0:00:01.125) 0:03:48.378 *********** 2025-06-02 17:52:04.047870 | orchestrator | =============================================================================== 2025-06-02 17:52:04.047875 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 76.27s 2025-06-02 17:52:04.047879 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.51s 2025-06-02 17:52:04.047883 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.61s 2025-06-02 17:52:04.047887 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.54s 2025-06-02 17:52:04.047891 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.79s 2025-06-02 17:52:04.047895 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.30s 2025-06-02 17:52:04.047899 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.98s 2025-06-02 17:52:04.047904 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.80s 2025-06-02 17:52:04.047908 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.74s 2025-06-02 17:52:04.047912 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.38s 2025-06-02 17:52:04.047916 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.90s 2025-06-02 17:52:04.047920 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.50s 2025-06-02 17:52:04.047924 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.50s 2025-06-02 
17:52:04.047929 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.46s 2025-06-02 17:52:04.047933 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.45s 2025-06-02 17:52:04.047937 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.09s 2025-06-02 17:52:04.047964 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.95s 2025-06-02 17:52:04.047968 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.92s 2025-06-02 17:52:04.047973 | orchestrator | cinder : include_tasks -------------------------------------------------- 2.82s 2025-06-02 17:52:04.047977 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.61s 2025-06-02 17:52:04.047985 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:52:04.047990 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:52:04.047994 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:52:04.048088 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task 4ec448f7-8d0b-4ba1-8e1f-f3c144ca7460 is in state STARTED 2025-06-02 17:52:04.048095 | orchestrator | 2025-06-02 17:52:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:07.075829 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:52:07.076607 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:52:07.077741 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:52:07.078924 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task 
4ec448f7-8d0b-4ba1-8e1f-f3c144ca7460 is in state STARTED 2025-06-02 17:52:07.079522 | orchestrator | 2025-06-02 17:52:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:53:29.083105 | orchestrator | 2025-06-02 17:53:29 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:53:29.083579 | orchestrator | 2025-06-02 17:53:29 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:53:29.084500 | orchestrator | 2025-06-02 17:53:29 | INFO  | Task cdb39d68-1c23-4e24-b04b-fe866f1e2323 is in state STARTED 2025-06-02 17:53:29.085269 | orchestrator | 2025-06-02 17:53:29 | INFO  | Task 
65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:53:29.086564 | orchestrator | 2025-06-02 17:53:29 | INFO  | Task 4ec448f7-8d0b-4ba1-8e1f-f3c144ca7460 is in state SUCCESS 2025-06-02 17:53:29.086631 | orchestrator | 2025-06-02 17:53:29.088442 | orchestrator | 2025-06-02 17:53:29.088477 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:53:29.088487 | orchestrator | 2025-06-02 17:53:29.088495 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:53:29.088503 | orchestrator | Monday 02 June 2025 17:51:17 +0000 (0:00:00.261) 0:00:00.261 *********** 2025-06-02 17:53:29.088511 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:53:29.088520 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:53:29.088528 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:53:29.088536 | orchestrator | 2025-06-02 17:53:29.088545 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:53:29.088552 | orchestrator | Monday 02 June 2025 17:51:17 +0000 (0:00:00.299) 0:00:00.561 *********** 2025-06-02 17:53:29.088560 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-02 17:53:29.088597 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-02 17:53:29.088606 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-02 17:53:29.088614 | orchestrator | 2025-06-02 17:53:29.088622 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-02 17:53:29.088629 | orchestrator | 2025-06-02 17:53:29.088637 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 17:53:29.088644 | orchestrator | Monday 02 June 2025 17:51:18 +0000 (0:00:00.425) 0:00:00.986 *********** 2025-06-02 17:53:29.088652 | orchestrator | included: 
/ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:53:29.088661 | orchestrator | 2025-06-02 17:53:29.088670 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-02 17:53:29.088678 | orchestrator | Monday 02 June 2025 17:51:18 +0000 (0:00:00.552) 0:00:01.539 *********** 2025-06-02 17:53:29.088690 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-02 17:53:29.088698 | orchestrator | 2025-06-02 17:53:29.088706 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-02 17:53:29.088714 | orchestrator | Monday 02 June 2025 17:51:22 +0000 (0:00:03.616) 0:00:05.155 *********** 2025-06-02 17:53:29.088738 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-02 17:53:29.088748 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-02 17:53:29.088757 | orchestrator | 2025-06-02 17:53:29.088764 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-02 17:53:29.088771 | orchestrator | Monday 02 June 2025 17:51:28 +0000 (0:00:06.420) 0:00:11.576 *********** 2025-06-02 17:53:29.088779 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:53:29.088786 | orchestrator | 2025-06-02 17:53:29.088794 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-02 17:53:29.088802 | orchestrator | Monday 02 June 2025 17:51:32 +0000 (0:00:03.341) 0:00:14.917 *********** 2025-06-02 17:53:29.088809 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:53:29.088817 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-02 17:53:29.088824 | orchestrator | 2025-06-02 17:53:29.088832 | orchestrator | TASK 
[service-ks-register : barbican | Creating roles] ************************* 2025-06-02 17:53:29.088840 | orchestrator | Monday 02 June 2025 17:51:36 +0000 (0:00:04.078) 0:00:18.995 *********** 2025-06-02 17:53:29.088849 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:53:29.088910 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-02 17:53:29.088921 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-02 17:53:29.088928 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-02 17:53:29.088936 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-02 17:53:29.088944 | orchestrator | 2025-06-02 17:53:29.088951 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-02 17:53:29.088959 | orchestrator | Monday 02 June 2025 17:51:52 +0000 (0:00:15.727) 0:00:34.723 *********** 2025-06-02 17:53:29.088966 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-02 17:53:29.088974 | orchestrator | 2025-06-02 17:53:29.088981 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-02 17:53:29.088989 | orchestrator | Monday 02 June 2025 17:51:56 +0000 (0:00:04.498) 0:00:39.221 *********** 2025-06-02 17:53:29.089000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.089025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.089035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.089045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089223 | orchestrator | 2025-06-02 17:53:29.089238 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-02 17:53:29.089247 | orchestrator | Monday 02 June 2025 17:51:58 +0000 (0:00:02.131) 0:00:41.353 *********** 2025-06-02 17:53:29.089257 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-02 17:53:29.089271 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-02 17:53:29.089279 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-02 17:53:29.089287 | orchestrator | 2025-06-02 17:53:29.089295 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-02 17:53:29.089304 | orchestrator | Monday 02 June 2025 17:51:59 +0000 (0:00:01.237) 0:00:42.590 *********** 2025-06-02 17:53:29.089312 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:29.089320 | orchestrator | 2025-06-02 17:53:29.089328 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-02 17:53:29.089336 | orchestrator | Monday 02 June 2025 17:52:00 +0000 (0:00:00.138) 0:00:42.728 *********** 2025-06-02 17:53:29.089344 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:29.089353 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:29.089362 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:29.089370 | orchestrator | 2025-06-02 17:53:29.089380 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 17:53:29.089389 | orchestrator | Monday 02 June 2025 17:52:00 +0000 (0:00:00.760) 
0:00:43.489 *********** 2025-06-02 17:53:29.089398 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:53:29.089407 | orchestrator | 2025-06-02 17:53:29.089415 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-02 17:53:29.089424 | orchestrator | Monday 02 June 2025 17:52:01 +0000 (0:00:00.583) 0:00:44.072 *********** 2025-06-02 17:53:29.089432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.089451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.089465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.089480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.089543 | orchestrator | 2025-06-02 17:53:29.089552 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-02 17:53:29.089560 | orchestrator | Monday 02 June 2025 17:52:05 +0000 (0:00:04.089) 0:00:48.162 *********** 2025-06-02 17:53:29.089573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.089583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089602 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:29.089616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.089624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089657 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:29.089666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.089674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089692 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:29.089700 | orchestrator | 2025-06-02 17:53:29.089713 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-02 17:53:29.089722 | orchestrator | Monday 02 June 2025 17:52:06 +0000 (0:00:01.200) 0:00:49.363 *********** 2025-06-02 17:53:29.089730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.089763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089791 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:29.089799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.089808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089837 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
17:53:29.089845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.089857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.089874 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:29.089882 | orchestrator | 2025-06-02 17:53:29.089891 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-02 17:53:29.089899 | orchestrator | Monday 02 June 2025 17:52:07 +0000 (0:00:00.790) 0:00:50.153 *********** 2025-06-02 17:53:29.089908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090294 | orchestrator | 2025-06-02 17:53:29.090302 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-02 17:53:29.090314 | orchestrator | Monday 02 June 2025 
17:52:11 +0000 (0:00:03.671) 0:00:53.825 *********** 2025-06-02 17:53:29.090322 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:53:29.090339 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:29.090347 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:53:29.090355 | orchestrator | 2025-06-02 17:53:29.090362 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-02 17:53:29.090370 | orchestrator | Monday 02 June 2025 17:52:13 +0000 (0:00:02.648) 0:00:56.474 *********** 2025-06-02 17:53:29.090377 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:53:29.090385 | orchestrator | 2025-06-02 17:53:29.090391 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-02 17:53:29.090398 | orchestrator | Monday 02 June 2025 17:52:15 +0000 (0:00:01.358) 0:00:57.832 *********** 2025-06-02 17:53:29.090404 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:29.090410 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:29.090416 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:29.090423 | orchestrator | 2025-06-02 17:53:29.090429 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-02 17:53:29.090436 | orchestrator | Monday 02 June 2025 17:52:16 +0000 (0:00:01.282) 0:00:59.115 *********** 2025-06-02 17:53:29.090443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090533 | orchestrator | 2025-06-02 17:53:29.090540 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-02 17:53:29.090547 | orchestrator | Monday 02 June 2025 17:52:24 +0000 (0:00:08.438) 0:01:07.554 *********** 2025-06-02 17:53:29.090558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.090566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.090573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.090586 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:29.090597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.090605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.090613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.090620 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:29.090630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:53:29.090637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.090653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:53:29.090660 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:29.090667 | orchestrator | 
2025-06-02 17:53:29.090674 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-02 17:53:29.090681 | orchestrator | Monday 02 June 2025 17:52:27 +0000 (0:00:02.246) 0:01:09.800 *********** 2025-06-02 17:53:29.090692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:29.090722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090729 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090762 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:29.090781 | orchestrator | 2025-06-02 17:53:29.090788 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 17:53:29.090795 | orchestrator | Monday 02 June 2025 17:52:30 +0000 (0:00:03.527) 0:01:13.328 *********** 2025-06-02 17:53:29.090802 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:29.090810 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:29.090817 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:29.090824 | orchestrator | 2025-06-02 17:53:29.090831 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-02 17:53:29.090837 | orchestrator | Monday 02 June 2025 17:52:31 +0000 (0:00:00.579) 0:01:13.907 *********** 
2025-06-02 17:53:29.090844 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:29.090851 | orchestrator | 2025-06-02 17:53:29.090858 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-02 17:53:29.090865 | orchestrator | Monday 02 June 2025 17:52:33 +0000 (0:00:02.094) 0:01:16.002 *********** 2025-06-02 17:53:29.090872 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:29.090879 | orchestrator | 2025-06-02 17:53:29.090886 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-02 17:53:29.090894 | orchestrator | Monday 02 June 2025 17:52:35 +0000 (0:00:02.463) 0:01:18.465 *********** 2025-06-02 17:53:29.090901 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:29.090909 | orchestrator | 2025-06-02 17:53:29.090916 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 17:53:29.090924 | orchestrator | Monday 02 June 2025 17:52:48 +0000 (0:00:12.692) 0:01:31.157 *********** 2025-06-02 17:53:29.090931 | orchestrator | 2025-06-02 17:53:29.090939 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 17:53:29.090947 | orchestrator | Monday 02 June 2025 17:52:48 +0000 (0:00:00.216) 0:01:31.373 *********** 2025-06-02 17:53:29.090954 | orchestrator | 2025-06-02 17:53:29.090962 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 17:53:29.090970 | orchestrator | Monday 02 June 2025 17:52:48 +0000 (0:00:00.172) 0:01:31.546 *********** 2025-06-02 17:53:29.090978 | orchestrator | 2025-06-02 17:53:29.090985 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-02 17:53:29.090991 | orchestrator | Monday 02 June 2025 17:52:49 +0000 (0:00:00.101) 0:01:31.648 *********** 2025-06-02 17:53:29.090998 | orchestrator | changed: [testbed-node-2] 
2025-06-02 17:53:29.091005 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:29.091012 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:53:29.091019 | orchestrator | 2025-06-02 17:53:29.091026 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-02 17:53:29.091033 | orchestrator | Monday 02 June 2025 17:53:02 +0000 (0:00:13.308) 0:01:44.956 *********** 2025-06-02 17:53:29.091040 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:29.091048 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:53:29.091115 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:53:29.091125 | orchestrator | 2025-06-02 17:53:29.091133 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-02 17:53:29.091141 | orchestrator | Monday 02 June 2025 17:53:13 +0000 (0:00:11.332) 0:01:56.288 *********** 2025-06-02 17:53:29.091148 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:29.091155 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:53:29.091161 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:53:29.091168 | orchestrator | 2025-06-02 17:53:29.091175 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:53:29.091183 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 17:53:29.091191 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:53:29.091205 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:53:29.091211 | orchestrator | 2025-06-02 17:53:29.091218 | orchestrator | 2025-06-02 17:53:29.091225 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:53:29.091232 | orchestrator | Monday 02 June 2025 17:53:25 +0000 
(0:00:12.259) 0:02:08.548 *********** 2025-06-02 17:53:29.091239 | orchestrator | =============================================================================== 2025-06-02 17:53:29.091246 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.73s 2025-06-02 17:53:29.091252 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.31s 2025-06-02 17:53:29.091259 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.69s 2025-06-02 17:53:29.091266 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.26s 2025-06-02 17:53:29.091277 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.33s 2025-06-02 17:53:29.091285 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.44s 2025-06-02 17:53:29.091292 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.42s 2025-06-02 17:53:29.091300 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.50s 2025-06-02 17:53:29.091307 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.09s 2025-06-02 17:53:29.091314 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.08s 2025-06-02 17:53:29.091322 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.67s 2025-06-02 17:53:29.091329 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.62s 2025-06-02 17:53:29.091336 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.53s 2025-06-02 17:53:29.091343 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.34s 2025-06-02 17:53:29.091349 | orchestrator | barbican : Copying over barbican-api.ini 
-------------------------------- 2.65s 2025-06-02 17:53:29.091355 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.46s 2025-06-02 17:53:29.091362 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.25s 2025-06-02 17:53:29.091369 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.13s 2025-06-02 17:53:29.091376 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.09s 2025-06-02 17:53:29.091383 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.36s 2025-06-02 17:53:29.092303 | orchestrator | 2025-06-02 17:53:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:53:32.111624 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:53:32.111861 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:53:32.113700 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task cdb39d68-1c23-4e24-b04b-fe866f1e2323 is in state STARTED 2025-06-02 17:53:32.114181 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:53:32.114352 | orchestrator | 2025-06-02 17:53:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:53:35.143225 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:53:35.143452 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:53:35.143976 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task cdb39d68-1c23-4e24-b04b-fe866f1e2323 is in state STARTED 2025-06-02 17:53:35.144729 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:53:35.144785 | 
orchestrator | 2025-06-02 17:53:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:14.693290 |
orchestrator | 2025-06-02 17:54:14 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:54:14.693972 | orchestrator | 2025-06-02 17:54:14 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:54:14.695953 | orchestrator | 2025-06-02 17:54:14 | INFO  | Task cdb39d68-1c23-4e24-b04b-fe866f1e2323 is in state STARTED 2025-06-02 17:54:14.698818 | orchestrator | 2025-06-02 17:54:14 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:54:14.698844 | orchestrator | 2025-06-02 17:54:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:17.738604 | orchestrator | 2025-06-02 17:54:17 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:54:17.739488 | orchestrator | 2025-06-02 17:54:17 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:54:17.740106 | orchestrator | 2025-06-02 17:54:17 | INFO  | Task cdb39d68-1c23-4e24-b04b-fe866f1e2323 is in state SUCCESS 2025-06-02 17:54:17.742274 | orchestrator | 2025-06-02 17:54:17 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:54:17.742900 | orchestrator | 2025-06-02 17:54:17 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:54:17.742923 | orchestrator | 2025-06-02 17:54:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:20.801868 | orchestrator | 2025-06-02 17:54:20 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state STARTED 2025-06-02 17:54:20.805729 | orchestrator | 2025-06-02 17:54:20 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:54:20.808088 | orchestrator | 2025-06-02 17:54:20 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:54:20.810274 | orchestrator | 2025-06-02 17:54:20 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:54:20.810324 | 
orchestrator | 2025-06-02 17:54:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:09.522387 | orchestrator | 2025-06-02
17:55:09 | INFO  | Task f33b9c35-20bd-461e-ac87-a8e59a612c31 is in state SUCCESS 2025-06-02 17:55:09.523318 | orchestrator | 2025-06-02 17:55:09.523372 | orchestrator | 2025-06-02 17:55:09.523389 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-02 17:55:09.523404 | orchestrator | 2025-06-02 17:55:09.523417 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-02 17:55:09.523430 | orchestrator | Monday 02 June 2025 17:53:34 +0000 (0:00:00.247) 0:00:00.247 *********** 2025-06-02 17:55:09.523443 | orchestrator | changed: [localhost] 2025-06-02 17:55:09.523457 | orchestrator | 2025-06-02 17:55:09.523471 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-02 17:55:09.523483 | orchestrator | Monday 02 June 2025 17:53:35 +0000 (0:00:01.327) 0:00:01.574 *********** 2025-06-02 17:55:09.523496 | orchestrator | changed: [localhost] 2025-06-02 17:55:09.523510 | orchestrator | 2025-06-02 17:55:09.523541 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-02 17:55:09.523554 | orchestrator | Monday 02 June 2025 17:54:09 +0000 (0:00:33.466) 0:00:35.041 *********** 2025-06-02 17:55:09.523567 | orchestrator | changed: [localhost] 2025-06-02 17:55:09.523579 | orchestrator | 2025-06-02 17:55:09.523591 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:55:09.523604 | orchestrator | 2025-06-02 17:55:09.523616 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:55:09.523628 | orchestrator | Monday 02 June 2025 17:54:14 +0000 (0:00:05.525) 0:00:40.566 *********** 2025-06-02 17:55:09.523640 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:55:09.523652 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:55:09.523664 | orchestrator | ok: [testbed-node-2] 
2025-06-02 17:55:09.523676 | orchestrator |
2025-06-02 17:55:09.523709 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:55:09.523722 | orchestrator | Monday 02 June 2025 17:54:15 +0000 (0:00:00.450) 0:00:41.016 ***********
2025-06-02 17:55:09.523747 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-06-02 17:55:09.523761 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-06-02 17:55:09.523773 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-06-02 17:55:09.523787 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-06-02 17:55:09.523800 | orchestrator |
2025-06-02 17:55:09.523813 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-06-02 17:55:09.524074 | orchestrator | skipping: no hosts matched
2025-06-02 17:55:09.524093 | orchestrator |
2025-06-02 17:55:09.524106 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:55:09.524120 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:55:09.524137 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:55:09.524153 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:55:09.524187 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:55:09.524200 | orchestrator |
2025-06-02 17:55:09.524212 | orchestrator |
2025-06-02 17:55:09.524224 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:55:09.524237 | orchestrator | Monday 02 June 2025 17:54:15 +0000 (0:00:00.447) 0:00:41.464 ***********
2025-06-02 17:55:09.524250 | orchestrator | ===============================================================================
2025-06-02 17:55:09.524264 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.47s
2025-06-02 17:55:09.524277 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.53s
2025-06-02 17:55:09.524287 | orchestrator | Ensure the destination directory exists --------------------------------- 1.33s
2025-06-02 17:55:09.524295 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s
2025-06-02 17:55:09.524303 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2025-06-02 17:55:09.524311 | orchestrator |
2025-06-02 17:55:09.524319 | orchestrator |
2025-06-02 17:55:09.524327 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:55:09.524335 | orchestrator |
2025-06-02 17:55:09.524343 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:55:09.524351 | orchestrator | Monday 02 June 2025 17:52:08 +0000 (0:00:00.478) 0:00:00.478 ***********
2025-06-02 17:55:09.524358 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:55:09.524367 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:55:09.524374 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:55:09.524382 | orchestrator |
2025-06-02 17:55:09.524390 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:55:09.524398 | orchestrator | Monday 02 June 2025 17:52:09 +0000 (0:00:00.574) 0:00:01.052 ***********
2025-06-02 17:55:09.524406 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-06-02 17:55:09.524414 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-06-02 17:55:09.524422 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-06-02 17:55:09.524430 | orchestrator |
2025-06-02 17:55:09.524438 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-06-02 17:55:09.524446 | orchestrator |
2025-06-02 17:55:09.524453 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-02 17:55:09.524461 | orchestrator | Monday 02 June 2025 17:52:09 +0000 (0:00:00.613) 0:00:01.665 ***********
2025-06-02 17:55:09.524469 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:55:09.524477 | orchestrator |
2025-06-02 17:55:09.524485 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-06-02 17:55:09.524493 | orchestrator | Monday 02 June 2025 17:52:10 +0000 (0:00:00.646) 0:00:02.312 ***********
2025-06-02 17:55:09.524520 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-06-02 17:55:09.524528 | orchestrator |
2025-06-02 17:55:09.524536 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-06-02 17:55:09.524544 | orchestrator | Monday 02 June 2025 17:52:14 +0000 (0:00:03.543) 0:00:05.855 ***********
2025-06-02 17:55:09.524563 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-06-02 17:55:09.524571 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-06-02 17:55:09.524579 | orchestrator |
2025-06-02 17:55:09.524594 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-06-02 17:55:09.524603 | orchestrator | Monday 02 June 2025 17:52:20 +0000 (0:00:06.293) 0:00:12.148 ***********
2025-06-02 17:55:09.524611 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 17:55:09.524619 | orchestrator |
2025-06-02 17:55:09.524627 | orchestrator | TASK
[service-ks-register : designate | Creating users] ************************
2025-06-02 17:55:09.524635 | orchestrator | Monday 02 June 2025 17:52:23 +0000 (0:00:03.368) 0:00:15.516 ***********
2025-06-02 17:55:09.524644 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 17:55:09.524652 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-06-02 17:55:09.524659 | orchestrator |
2025-06-02 17:55:09.524667 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-06-02 17:55:09.524701 | orchestrator | Monday 02 June 2025 17:52:27 +0000 (0:00:04.070) 0:00:19.587 ***********
2025-06-02 17:55:09.524710 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 17:55:09.524730 | orchestrator |
2025-06-02 17:55:09.524738 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-06-02 17:55:09.524746 | orchestrator | Monday 02 June 2025 17:52:31 +0000 (0:00:03.659) 0:00:23.247 ***********
2025-06-02 17:55:09.524754 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-06-02 17:55:09.524822 | orchestrator |
2025-06-02 17:55:09.524838 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-06-02 17:55:09.524851 | orchestrator | Monday 02 June 2025 17:52:35 +0000 (0:00:04.015) 0:00:27.262 ***********
2025-06-02 17:55:09.524865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.524879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.524897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.524919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.524931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.524939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.524948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.524958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.524966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.524986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}}) 2025-06-02 17:55:09.525082 | orchestrator | 2025-06-02 17:55:09.525090 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-02 17:55:09.525098 | orchestrator | Monday 02 June 2025 17:52:39 +0000 (0:00:04.023) 0:00:31.285 *********** 2025-06-02 17:55:09.525106 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:09.525114 | orchestrator | 2025-06-02 17:55:09.525122 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-02 17:55:09.525130 | orchestrator | Monday 02 June 2025 17:52:39 +0000 (0:00:00.356) 0:00:31.642 *********** 2025-06-02 17:55:09.525138 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:09.525145 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:09.525153 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:09.525199 | orchestrator | 2025-06-02 17:55:09.525208 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 17:55:09.525216 | orchestrator | Monday 02 June 2025 17:52:40 +0000 (0:00:00.511) 0:00:32.153 *********** 2025-06-02 17:55:09.525224 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:55:09.525232 | orchestrator | 2025-06-02 17:55:09.525240 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-02 17:55:09.525248 | orchestrator | Monday 02 June 2025 17:52:41 +0000 (0:00:00.831) 0:00:32.985 *********** 2025-06-02 17:55:09.525256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.525271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.525291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.525300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.525461 | orchestrator | 2025-06-02 17:55:09.525470 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-02 17:55:09.525478 | orchestrator | Monday 02 June 2025 17:52:48 +0000 (0:00:07.159) 0:00:40.144 *********** 2025-06-02 17:55:09.525486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.525501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.526221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526299 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:09.526309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.526318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.526327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526380 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:09.526388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.526396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.526404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526448 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:09.526455 | orchestrator | 2025-06-02 17:55:09.526462 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-02 17:55:09.526470 | orchestrator | Monday 02 June 2025 17:52:50 +0000 (0:00:02.596) 0:00:42.740 *********** 2025-06-02 17:55:09.526476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.526484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.526491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 
17:55:09.526505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.526524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.526531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.526559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.526585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526592 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:09.526600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526614 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:09.526625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.526654 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:09.526661 | orchestrator | 2025-06-02 17:55:09.526668 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-02 17:55:09.526675 | orchestrator | Monday 02 June 2025 17:52:53 +0000 (0:00:02.727) 0:00:45.468 *********** 2025-06-02 17:55:09.526682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.526689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.526701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.526714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:55:09.526886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:55:09.526895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:55:09.526903 | orchestrator |
2025-06-02 17:55:09.526910 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-06-02 17:55:09.526918 | orchestrator | Monday 02 June 2025 17:53:00 +0000 (0:00:07.059) 0:00:52.527 ***********
2025-06-02 17:55:09.526926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.526935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.526948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.526964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 
17:55:09.526981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.526997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 
17:55:09.527005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:55:09.527104 | orchestrator |
2025-06-02 17:55:09.527111 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-06-02 17:55:09.527117 | orchestrator | Monday 02 June 2025 17:53:24 +0000 (0:00:24.170) 0:01:16.697 ***********
2025-06-02 17:55:09.527124 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-06-02 17:55:09.527131 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-06-02 17:55:09.527138 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-06-02 17:55:09.527144 | orchestrator |
2025-06-02 17:55:09.527151 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-06-02 17:55:09.527174 | orchestrator | Monday 02 June 2025 17:53:31 +0000 (0:00:06.524) 0:01:23.222 ***********
2025-06-02 17:55:09.527186 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-06-02 17:55:09.527197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-06-02 17:55:09.527207 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-06-02 17:55:09.527218 | orchestrator |
2025-06-02 17:55:09.527228 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-06-02 17:55:09.527239 | orchestrator | Monday 02
June 2025 17:53:35 +0000 (0:00:04.020) 0:01:27.242 *********** 2025-06-02 17:55:09.527250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.527278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.527295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.527307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})
2025-06-02 17:55:09.527449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:55:09.527455 | orchestrator |
2025-06-02 17:55:09.527462 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-06-02 17:55:09.527469 | orchestrator | Monday 02 June 2025 17:53:38 +0000 (0:00:03.217) 0:01:30.460 ***********
2025-06-02 17:55:09.527476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 17:55:09.527488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name':
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.527503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.527605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.527693 | orchestrator | 2025-06-02 17:55:09.527700 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 17:55:09.527713 | orchestrator | Monday 02 June 2025 17:53:41 +0000 (0:00:03.297) 0:01:33.758 *********** 2025-06-02 17:55:09.527719 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
17:55:09.527726 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:09.527733 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:09.527740 | orchestrator | 2025-06-02 17:55:09.527746 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-02 17:55:09.527753 | orchestrator | Monday 02 June 2025 17:53:42 +0000 (0:00:00.680) 0:01:34.438 *********** 2025-06-02 17:55:09.527760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.527769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.527786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 
17:55:09.527833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527845 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:09.527853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:55:09.527860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.527872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527909 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:09.527916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 
17:55:09.527923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:55:09.527935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527953 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:55:09.527973 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:09.527980 | orchestrator | 2025-06-02 17:55:09.527987 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-02 17:55:09.527994 | orchestrator | Monday 02 June 2025 17:53:44 +0000 (0:00:01.416) 0:01:35.854 *********** 2025-06-02 17:55:09.528001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.528008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.528023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:55:09.528030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:55:09.528214 | orchestrator | 2025-06-02 17:55:09.528221 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 17:55:09.528228 | orchestrator | Monday 02 June 2025 17:53:48 +0000 (0:00:04.663) 0:01:40.517 *********** 2025-06-02 17:55:09.528235 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:09.528241 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:09.528248 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:09.528255 | orchestrator | 2025-06-02 17:55:09.528261 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-02 17:55:09.528271 | orchestrator | Monday 02 June 2025 17:53:49 +0000 (0:00:00.384) 0:01:40.902 *********** 2025-06-02 17:55:09.528284 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-02 17:55:09.528297 | orchestrator | 2025-06-02 17:55:09.528315 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-02 17:55:09.528326 | orchestrator | Monday 02 June 2025 17:53:52 +0000 (0:00:03.446) 0:01:44.348 *********** 2025-06-02 17:55:09.528337 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:55:09.528348 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-02 17:55:09.528358 | orchestrator | 2025-06-02 17:55:09.528370 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-02 17:55:09.528381 | orchestrator | Monday 02 June 2025 17:53:54 +0000 (0:00:02.235) 0:01:46.584 *********** 2025-06-02 17:55:09.528392 | orchestrator | changed: [testbed-node-0] 
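The container definitions above each carry a `healthcheck` with a test such as `healthcheck_port designate-central 5672`, which the container runtime runs every `interval` seconds. As an aside, a minimal illustrative stand-in for such a port-based liveness probe (this is NOT kolla's actual `healthcheck_port` script, just an assumed TCP-connect check) could look like:

```python
import socket

def healthcheck_tcp(host: str, port: int, timeout: float = 30.0) -> bool:
    """Illustrative port-based healthcheck: succeed iff a TCP
    connection to host:port can be opened within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A runtime would treat a `False` result (or non-zero exit code in the CMD-SHELL form) as a failed probe and, after `retries` consecutive failures, mark the container unhealthy.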
2025-06-02 17:55:09.528405 | orchestrator | 2025-06-02 17:55:09.528412 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 17:55:09.528419 | orchestrator | Monday 02 June 2025 17:54:11 +0000 (0:00:16.396) 0:02:02.980 *********** 2025-06-02 17:55:09.528425 | orchestrator | 2025-06-02 17:55:09.528432 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 17:55:09.528439 | orchestrator | Monday 02 June 2025 17:54:11 +0000 (0:00:00.100) 0:02:03.080 *********** 2025-06-02 17:55:09.528445 | orchestrator | 2025-06-02 17:55:09.528452 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 17:55:09.528458 | orchestrator | Monday 02 June 2025 17:54:11 +0000 (0:00:00.159) 0:02:03.240 *********** 2025-06-02 17:55:09.528465 | orchestrator | 2025-06-02 17:55:09.528471 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-02 17:55:09.528478 | orchestrator | Monday 02 June 2025 17:54:11 +0000 (0:00:00.162) 0:02:03.403 *********** 2025-06-02 17:55:09.528485 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:09.528491 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:09.528504 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:09.528511 | orchestrator | 2025-06-02 17:55:09.528518 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-02 17:55:09.528525 | orchestrator | Monday 02 June 2025 17:54:21 +0000 (0:00:09.828) 0:02:13.231 *********** 2025-06-02 17:55:09.528531 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:09.528538 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:09.528544 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:09.528551 | orchestrator | 2025-06-02 17:55:09.528557 | orchestrator | RUNNING HANDLER [designate : Restart designate-central 
container] ************** 2025-06-02 17:55:09.528569 | orchestrator | Monday 02 June 2025 17:54:34 +0000 (0:00:12.798) 0:02:26.030 *********** 2025-06-02 17:55:09.528578 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:09.528588 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:09.528603 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:09.528621 | orchestrator | 2025-06-02 17:55:09.528631 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-02 17:55:09.528640 | orchestrator | Monday 02 June 2025 17:54:39 +0000 (0:00:05.313) 0:02:31.344 *********** 2025-06-02 17:55:09.528650 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:09.528659 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:09.528669 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:09.528678 | orchestrator | 2025-06-02 17:55:09.528695 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-02 17:55:09.528704 | orchestrator | Monday 02 June 2025 17:54:47 +0000 (0:00:08.029) 0:02:39.373 *********** 2025-06-02 17:55:09.528713 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:09.528723 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:09.528731 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:09.528740 | orchestrator | 2025-06-02 17:55:09.528750 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-02 17:55:09.528760 | orchestrator | Monday 02 June 2025 17:54:53 +0000 (0:00:05.843) 0:02:45.217 *********** 2025-06-02 17:55:09.528770 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:09.528780 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:09.528790 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:09.528800 | orchestrator | 2025-06-02 17:55:09.528809 | orchestrator | TASK [designate : Non-destructive DNS pools update] 
**************************** 2025-06-02 17:55:09.528818 | orchestrator | Monday 02 June 2025 17:55:00 +0000 (0:00:06.962) 0:02:52.180 *********** 2025-06-02 17:55:09.528827 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:09.528836 | orchestrator | 2025-06-02 17:55:09.528845 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:55:09.528855 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 17:55:09.528867 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:55:09.528878 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:55:09.528889 | orchestrator | 2025-06-02 17:55:09.528900 | orchestrator | 2025-06-02 17:55:09.528909 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:55:09.528915 | orchestrator | Monday 02 June 2025 17:55:08 +0000 (0:00:07.800) 0:02:59.980 *********** 2025-06-02 17:55:09.528921 | orchestrator | =============================================================================== 2025-06-02 17:55:09.528928 | orchestrator | designate : Copying over designate.conf -------------------------------- 24.17s 2025-06-02 17:55:09.528934 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.40s 2025-06-02 17:55:09.528940 | orchestrator | designate : Restart designate-api container ---------------------------- 12.80s 2025-06-02 17:55:09.528946 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.83s 2025-06-02 17:55:09.528959 | orchestrator | designate : Restart designate-producer container ------------------------ 8.03s 2025-06-02 17:55:09.528966 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.80s 2025-06-02 
17:55:09.528972 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.16s 2025-06-02 17:55:09.528978 | orchestrator | designate : Copying over config.json files for services ----------------- 7.06s 2025-06-02 17:55:09.528984 | orchestrator | designate : Restart designate-worker container -------------------------- 6.96s 2025-06-02 17:55:09.528991 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.52s 2025-06-02 17:55:09.528997 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.29s 2025-06-02 17:55:09.529003 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.84s 2025-06-02 17:55:09.529009 | orchestrator | designate : Restart designate-central container ------------------------- 5.31s 2025-06-02 17:55:09.529015 | orchestrator | designate : Check designate containers ---------------------------------- 4.66s 2025-06-02 17:55:09.529022 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.07s 2025-06-02 17:55:09.529028 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.02s 2025-06-02 17:55:09.529034 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.02s 2025-06-02 17:55:09.529040 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.02s 2025-06-02 17:55:09.529046 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.66s 2025-06-02 17:55:09.529052 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.54s 2025-06-02 17:55:09.529059 | orchestrator | 2025-06-02 17:55:09 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:09.529065 | orchestrator | 2025-06-02 17:55:09 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 
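The repeating `Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check` entries that follow come from the orchestrator polling its task queue until each deploy task reaches a terminal state. A minimal sketch of such a polling loop (assumed behavior inferred from the log output, not the actual OSISM implementation) might be:

```python
import time
from typing import Callable, Iterable

def wait_for_task(get_state: Callable[[], str],
                  done_states: Iterable[str] = ("SUCCESS", "FAILURE"),
                  interval: float = 1.0,
                  max_checks: int = 3600) -> str:
    """Poll a task's state until it leaves STARTED, mirroring the
    'is in state STARTED ... Wait N second(s)' lines in the log."""
    done = set(done_states)
    for _ in range(max_checks):
        state = get_state()
        if state in done:
            return state
        print(f"Task is in state {state}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("task did not reach a terminal state")
```

With `interval=1.0` this produces one status line plus one wait line per check, matching the cadence of the log below.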
2025-06-02 17:55:09.529071 | orchestrator | 2025-06-02 17:55:09 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:09.529078 | orchestrator | 2025-06-02 17:55:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:12.568531 | orchestrator | 2025-06-02 17:55:12 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:12.570296 | orchestrator | 2025-06-02 17:55:12 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:55:12.572978 | orchestrator | 2025-06-02 17:55:12 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:12.574394 | orchestrator | 2025-06-02 17:55:12 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:12.574458 | orchestrator | 2025-06-02 17:55:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:15.614135 | orchestrator | 2025-06-02 17:55:15 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:15.617040 | orchestrator | 2025-06-02 17:55:15 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:55:15.619081 | orchestrator | 2025-06-02 17:55:15 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:15.621506 | orchestrator | 2025-06-02 17:55:15 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:15.621571 | orchestrator | 2025-06-02 17:55:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:18.670381 | orchestrator | 2025-06-02 17:55:18 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:18.672461 | orchestrator | 2025-06-02 17:55:18 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:55:18.674826 | orchestrator | 2025-06-02 17:55:18 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:18.676382 | 
orchestrator | 2025-06-02 17:55:18 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:18.676414 | orchestrator | 2025-06-02 17:55:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:21.719481 | orchestrator | 2025-06-02 17:55:21 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:21.721852 | orchestrator | 2025-06-02 17:55:21 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:55:21.721919 | orchestrator | 2025-06-02 17:55:21 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:21.722968 | orchestrator | 2025-06-02 17:55:21 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:21.723022 | orchestrator | 2025-06-02 17:55:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:24.768823 | orchestrator | 2025-06-02 17:55:24 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:24.771363 | orchestrator | 2025-06-02 17:55:24 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:55:24.773907 | orchestrator | 2025-06-02 17:55:24 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:24.774081 | orchestrator | 2025-06-02 17:55:24 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:24.774470 | orchestrator | 2025-06-02 17:55:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:27.823671 | orchestrator | 2025-06-02 17:55:27 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:27.824763 | orchestrator | 2025-06-02 17:55:27 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:55:27.827708 | orchestrator | 2025-06-02 17:55:27 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:27.829270 | orchestrator | 2025-06-02 
17:55:27 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:27.829330 | orchestrator | 2025-06-02 17:55:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:30.866303 | orchestrator | 2025-06-02 17:55:30 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:30.866416 | orchestrator | 2025-06-02 17:55:30 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state STARTED 2025-06-02 17:55:30.867467 | orchestrator | 2025-06-02 17:55:30 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:30.870954 | orchestrator | 2025-06-02 17:55:30 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state STARTED 2025-06-02 17:55:30.871021 | orchestrator | 2025-06-02 17:55:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:33.927257 | orchestrator | 2025-06-02 17:55:33 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:33.931890 | orchestrator | 2025-06-02 17:55:33.931987 | orchestrator | 2025-06-02 17:55:33.931996 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:55:33.932005 | orchestrator | 2025-06-02 17:55:33.932013 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:55:33.932020 | orchestrator | Monday 02 June 2025 17:54:20 +0000 (0:00:00.657) 0:00:00.657 *********** 2025-06-02 17:55:33.932028 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:55:33.932035 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:55:33.932043 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:55:33.932072 | orchestrator | 2025-06-02 17:55:33.932080 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:55:33.932099 | orchestrator | Monday 02 June 2025 17:54:20 +0000 (0:00:00.407) 0:00:01.065 *********** 2025-06-02 17:55:33.932107 | 
orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-02 17:55:33.932114 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-02 17:55:33.932120 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-02 17:55:33.932125 | orchestrator | 2025-06-02 17:55:33.932131 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-02 17:55:33.932137 | orchestrator | 2025-06-02 17:55:33.932143 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 17:55:33.932152 | orchestrator | Monday 02 June 2025 17:54:21 +0000 (0:00:00.470) 0:00:01.535 *********** 2025-06-02 17:55:33.932160 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:55:33.932168 | orchestrator | 2025-06-02 17:55:33.932176 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-02 17:55:33.932231 | orchestrator | Monday 02 June 2025 17:54:21 +0000 (0:00:00.689) 0:00:02.228 *********** 2025-06-02 17:55:33.932239 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-02 17:55:33.932247 | orchestrator | 2025-06-02 17:55:33.932254 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-02 17:55:33.932261 | orchestrator | Monday 02 June 2025 17:54:25 +0000 (0:00:03.890) 0:00:06.118 *********** 2025-06-02 17:55:33.932267 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-02 17:55:33.932275 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-02 17:55:33.932282 | orchestrator | 2025-06-02 17:55:33.932288 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-02 
17:55:33.932293 | orchestrator | Monday 02 June 2025 17:54:32 +0000 (0:00:06.896) 0:00:13.015 *********** 2025-06-02 17:55:33.932299 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:55:33.932305 | orchestrator | 2025-06-02 17:55:33.932311 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-02 17:55:33.932317 | orchestrator | Monday 02 June 2025 17:54:36 +0000 (0:00:03.534) 0:00:16.550 *********** 2025-06-02 17:55:33.932323 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:55:33.932330 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-02 17:55:33.932337 | orchestrator | 2025-06-02 17:55:33.932343 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-02 17:55:33.932350 | orchestrator | Monday 02 June 2025 17:54:40 +0000 (0:00:04.113) 0:00:20.663 *********** 2025-06-02 17:55:33.932357 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:55:33.932364 | orchestrator | 2025-06-02 17:55:33.932370 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-02 17:55:33.932377 | orchestrator | Monday 02 June 2025 17:54:43 +0000 (0:00:03.447) 0:00:24.111 *********** 2025-06-02 17:55:33.932383 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-02 17:55:33.932389 | orchestrator | 2025-06-02 17:55:33.932396 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 17:55:33.932417 | orchestrator | Monday 02 June 2025 17:54:47 +0000 (0:00:04.224) 0:00:28.335 *********** 2025-06-02 17:55:33.932426 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.932435 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.932443 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.932451 | orchestrator | 2025-06-02 
17:55:33.932458 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-02 17:55:33.932467 | orchestrator | Monday 02 June 2025 17:54:48 +0000 (0:00:00.281) 0:00:28.617 *********** 2025-06-02 17:55:33.932487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932540 | orchestrator | 2025-06-02 17:55:33.932549 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-02 17:55:33.932557 | orchestrator | Monday 02 June 2025 17:54:49 +0000 (0:00:01.010) 0:00:29.628 *********** 2025-06-02 17:55:33.932565 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.932573 | orchestrator | 2025-06-02 17:55:33.932581 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-02 17:55:33.932589 | orchestrator | Monday 02 June 2025 17:54:49 +0000 (0:00:00.109) 0:00:29.737 *********** 2025-06-02 17:55:33.932650 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.932659 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.932668 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.932675 | orchestrator | 2025-06-02 
17:55:33.932684 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 17:55:33.932691 | orchestrator | Monday 02 June 2025 17:54:49 +0000 (0:00:00.524) 0:00:30.261 *********** 2025-06-02 17:55:33.932698 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:55:33.932705 | orchestrator | 2025-06-02 17:55:33.932712 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-02 17:55:33.932725 | orchestrator | Monday 02 June 2025 17:54:50 +0000 (0:00:00.475) 0:00:30.736 *********** 2025-06-02 17:55:33.932732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932768 | orchestrator | 2025-06-02 17:55:33.932776 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-02 17:55:33.932782 | orchestrator | Monday 02 June 2025 17:54:51 +0000 (0:00:01.403) 0:00:32.140 *********** 2025-06-02 17:55:33.932788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.932800 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.932808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.932816 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.932831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.932839 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.932847 | orchestrator | 2025-06-02 17:55:33.932854 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-02 17:55:33.932863 | orchestrator | Monday 02 June 2025 17:54:52 +0000 (0:00:00.601) 0:00:32.742 *********** 2025-06-02 17:55:33.932874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.932883 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.932890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.932904 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.932912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.932920 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.932927 | orchestrator | 2025-06-02 17:55:33.932934 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-02 17:55:33.932941 | orchestrator | Monday 02 June 2025 17:54:52 +0000 (0:00:00.584) 0:00:33.326 *********** 2025-06-02 17:55:33.932954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.932985 | orchestrator | 2025-06-02 17:55:33.932993 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-02 17:55:33.933000 | orchestrator | Monday 02 June 2025 17:54:54 +0000 (0:00:01.403) 0:00:34.730 *********** 2025-06-02 17:55:33.933007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.933014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.933032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.933039 | orchestrator | 2025-06-02 17:55:33.933046 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-02 17:55:33.933054 | orchestrator | Monday 02 June 2025 17:54:57 +0000 (0:00:02.776) 0:00:37.507 *********** 2025-06-02 17:55:33.933060 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 17:55:33.933068 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 17:55:33.933076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-02 17:55:33.933087 | orchestrator | 2025-06-02 17:55:33.933094 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-02 17:55:33.933101 | orchestrator | Monday 02 June 2025 17:54:58 +0000 (0:00:01.753) 0:00:39.260 *********** 2025-06-02 17:55:33.933109 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.933115 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:33.933121 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:33.933127 | orchestrator | 2025-06-02 17:55:33.933132 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-02 17:55:33.933138 | orchestrator | Monday 
02 June 2025 17:55:00 +0000 (0:00:01.534) 0:00:40.795 *********** 2025-06-02 17:55:33.933145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.933152 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.933159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.933166 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.933210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:55:33.933219 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.933226 | orchestrator | 2025-06-02 17:55:33.933234 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-02 17:55:33.933241 | orchestrator | Monday 02 June 2025 17:55:01 +0000 (0:00:00.861) 0:00:41.657 *********** 2025-06-02 17:55:33.933248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.933266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.933274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:55:33.933282 | orchestrator | 2025-06-02 17:55:33.933290 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-02 17:55:33.933297 | orchestrator | Monday 02 June 2025 17:55:02 +0000 (0:00:01.327) 0:00:42.984 *********** 2025-06-02 17:55:33.933304 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.933311 | orchestrator | 2025-06-02 17:55:33.933318 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-02 17:55:33.933326 | orchestrator | Monday 02 June 2025 17:55:04 +0000 (0:00:02.141) 0:00:45.126 *********** 2025-06-02 17:55:33.933332 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.933339 | orchestrator | 2025-06-02 17:55:33.933346 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-02 17:55:33.933353 | orchestrator | Monday 02 June 2025 17:55:06 +0000 (0:00:02.245) 0:00:47.372 *********** 2025-06-02 17:55:33.933365 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.933373 | orchestrator | 2025-06-02 17:55:33.933380 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 17:55:33.933387 | orchestrator | Monday 02 June 2025 17:55:20 +0000 (0:00:13.144) 0:01:00.517 *********** 2025-06-02 17:55:33.933393 | orchestrator | 2025-06-02 17:55:33.933400 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 17:55:33.933412 | orchestrator | Monday 02 June 2025 17:55:20 +0000 (0:00:00.060) 0:01:00.577 *********** 2025-06-02 17:55:33.933420 | orchestrator | 2025-06-02 
17:55:33.933427 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 17:55:33.933438 | orchestrator | Monday 02 June 2025 17:55:20 +0000 (0:00:00.064) 0:01:00.641 *********** 2025-06-02 17:55:33.933445 | orchestrator | 2025-06-02 17:55:33.933452 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-02 17:55:33.933459 | orchestrator | Monday 02 June 2025 17:55:20 +0000 (0:00:00.070) 0:01:00.711 *********** 2025-06-02 17:55:33.933466 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.933473 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:33.933480 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:33.933487 | orchestrator | 2025-06-02 17:55:33.933494 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:55:33.933502 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:55:33.933511 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:55:33.933518 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:55:33.933525 | orchestrator | 2025-06-02 17:55:33.933532 | orchestrator | 2025-06-02 17:55:33.933539 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:55:33.933546 | orchestrator | Monday 02 June 2025 17:55:31 +0000 (0:00:10.895) 0:01:11.607 *********** 2025-06-02 17:55:33.933553 | orchestrator | =============================================================================== 2025-06-02 17:55:33.933561 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.14s 2025-06-02 17:55:33.933568 | orchestrator | placement : Restart placement-api container ---------------------------- 10.90s 
2025-06-02 17:55:33.933575 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.90s 2025-06-02 17:55:33.933581 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.22s 2025-06-02 17:55:33.933589 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.11s 2025-06-02 17:55:33.933596 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.89s 2025-06-02 17:55:33.933604 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.53s 2025-06-02 17:55:33.933611 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.45s 2025-06-02 17:55:33.933619 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.78s 2025-06-02 17:55:33.933626 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.25s 2025-06-02 17:55:33.933633 | orchestrator | placement : Creating placement databases -------------------------------- 2.14s 2025-06-02 17:55:33.933640 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.75s 2025-06-02 17:55:33.933648 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.53s 2025-06-02 17:55:33.933655 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.40s 2025-06-02 17:55:33.933663 | orchestrator | placement : Copying over config.json files for services ----------------- 1.40s 2025-06-02 17:55:33.933670 | orchestrator | placement : Check placement containers ---------------------------------- 1.33s 2025-06-02 17:55:33.933678 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.01s 2025-06-02 17:55:33.933685 | orchestrator | placement : Copying over existing policy file --------------------------- 0.86s 2025-06-02 
17:55:33.933693 | orchestrator | placement : include_tasks ----------------------------------------------- 0.69s 2025-06-02 17:55:33.933700 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.60s 2025-06-02 17:55:33.933714 | orchestrator | 2025-06-02 17:55:33 | INFO  | Task ba6894ca-e351-48fc-9794-6bafee70ebcb is in state SUCCESS 2025-06-02 17:55:33.933818 | orchestrator | 2025-06-02 17:55:33 | INFO  | Task 9082ddf3-cd2a-48cd-9d2d-2c2b5ce2dfc9 is in state STARTED 2025-06-02 17:55:33.935268 | orchestrator | 2025-06-02 17:55:33 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:33.938657 | orchestrator | 2025-06-02 17:55:33 | INFO  | Task 65b4a3ae-3ce0-4433-b05a-aae9aa5d180e is in state SUCCESS 2025-06-02 17:55:33.940609 | orchestrator | 2025-06-02 17:55:33.940661 | orchestrator | 2025-06-02 17:55:33.940670 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:55:33.940678 | orchestrator | 2025-06-02 17:55:33.940685 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:55:33.940692 | orchestrator | Monday 02 June 2025 17:51:03 +0000 (0:00:00.281) 0:00:00.281 *********** 2025-06-02 17:55:33.940698 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:55:33.940706 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:55:33.940713 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:55:33.940720 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:55:33.940726 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:55:33.940732 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:55:33.940739 | orchestrator | 2025-06-02 17:55:33.940745 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:55:33.940751 | orchestrator | Monday 02 June 2025 17:51:04 +0000 (0:00:00.710) 0:00:00.992 *********** 2025-06-02 17:55:33.940757 | 
orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-02 17:55:33.940764 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-02 17:55:33.940770 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-02 17:55:33.940777 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-02 17:55:33.940797 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-02 17:55:33.940804 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-02 17:55:33.940810 | orchestrator | 2025-06-02 17:55:33.940816 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-02 17:55:33.940823 | orchestrator | 2025-06-02 17:55:33.940828 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 17:55:33.940834 | orchestrator | Monday 02 June 2025 17:51:05 +0000 (0:00:00.596) 0:00:01.589 *********** 2025-06-02 17:55:33.940841 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:55:33.940849 | orchestrator | 2025-06-02 17:55:33.940855 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-02 17:55:33.940861 | orchestrator | Monday 02 June 2025 17:51:06 +0000 (0:00:01.702) 0:00:03.291 *********** 2025-06-02 17:55:33.940868 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:55:33.940874 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:55:33.940881 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:55:33.940888 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:55:33.940894 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:55:33.940900 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:55:33.940907 | orchestrator | 2025-06-02 17:55:33.940913 | orchestrator | TASK [neutron : Get container volume facts] 
************************************ 2025-06-02 17:55:33.940920 | orchestrator | Monday 02 June 2025 17:51:08 +0000 (0:00:01.777) 0:00:05.069 *********** 2025-06-02 17:55:33.940928 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:55:33.940935 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:55:33.940941 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:55:33.940947 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:55:33.940954 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:55:33.940960 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:55:33.940966 | orchestrator | 2025-06-02 17:55:33.940972 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-02 17:55:33.941033 | orchestrator | Monday 02 June 2025 17:51:09 +0000 (0:00:01.118) 0:00:06.188 *********** 2025-06-02 17:55:33.941041 | orchestrator | ok: [testbed-node-0] => { 2025-06-02 17:55:33.941068 | orchestrator |  "changed": false, 2025-06-02 17:55:33.941075 | orchestrator |  "msg": "All assertions passed" 2025-06-02 17:55:33.941083 | orchestrator | } 2025-06-02 17:55:33.941097 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 17:55:33.941104 | orchestrator |  "changed": false, 2025-06-02 17:55:33.941110 | orchestrator |  "msg": "All assertions passed" 2025-06-02 17:55:33.941117 | orchestrator | } 2025-06-02 17:55:33.941123 | orchestrator | ok: [testbed-node-2] => { 2025-06-02 17:55:33.941129 | orchestrator |  "changed": false, 2025-06-02 17:55:33.941135 | orchestrator |  "msg": "All assertions passed" 2025-06-02 17:55:33.941141 | orchestrator | } 2025-06-02 17:55:33.941147 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 17:55:33.941153 | orchestrator |  "changed": false, 2025-06-02 17:55:33.941160 | orchestrator |  "msg": "All assertions passed" 2025-06-02 17:55:33.941166 | orchestrator | } 2025-06-02 17:55:33.941173 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:55:33.941226 | orchestrator |  "changed": false, 2025-06-02 
17:55:33.941236 | orchestrator |  "msg": "All assertions passed" 2025-06-02 17:55:33.941243 | orchestrator | } 2025-06-02 17:55:33.941250 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:55:33.941256 | orchestrator |  "changed": false, 2025-06-02 17:55:33.941264 | orchestrator |  "msg": "All assertions passed" 2025-06-02 17:55:33.941271 | orchestrator | } 2025-06-02 17:55:33.941278 | orchestrator | 2025-06-02 17:55:33.941286 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-02 17:55:33.941294 | orchestrator | Monday 02 June 2025 17:51:10 +0000 (0:00:00.788) 0:00:06.976 *********** 2025-06-02 17:55:33.941301 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.941308 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.941315 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.941321 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.941337 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.941346 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.941353 | orchestrator | 2025-06-02 17:55:33.941359 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-02 17:55:33.941366 | orchestrator | Monday 02 June 2025 17:51:11 +0000 (0:00:00.619) 0:00:07.596 *********** 2025-06-02 17:55:33.941372 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-02 17:55:33.941379 | orchestrator | 2025-06-02 17:55:33.941386 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-02 17:55:33.941391 | orchestrator | Monday 02 June 2025 17:51:14 +0000 (0:00:03.328) 0:00:10.925 *********** 2025-06-02 17:55:33.941398 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-02 17:55:33.941407 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
https://api.testbed.osism.xyz:9696 -> public)
2025-06-02 17:55:33.941413 | orchestrator |
TASK [service-ks-register : neutron | Creating projects] ***********************
Monday 02 June 2025 17:51:20 +0000 (0:00:06.412) 0:00:17.338 ***********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : neutron | Creating users] **************************
Monday 02 June 2025 17:51:23 +0000 (0:00:03.110) 0:00:20.448 ***********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=neutron -> service)

TASK [service-ks-register : neutron | Creating roles] **************************
Monday 02 June 2025 17:51:27 +0000 (0:00:03.815) 0:00:24.263 ***********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : neutron | Granting user roles] *********************
Monday 02 June 2025 17:51:31 +0000 (0:00:03.679) 0:00:27.943 ***********
changed: [testbed-node-0] => (item=neutron -> service -> admin)
changed: [testbed-node-0] => (item=neutron -> service -> service)

TASK [neutron : include_tasks] *************************************************
Monday 02 June 2025 17:51:39 +0000 (0:00:07.724) 0:00:35.667 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Load and persist kernel modules] *****************************************
Monday 02 June 2025 17:51:39 +0000 (0:00:00.793) 0:00:36.461 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [neutron : Check IPv6 support] ********************************************
Monday 02 June 2025 17:51:42 +0000 (0:00:02.316) 0:00:38.777 ***********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Setting sysctl values] ***************************************************
Monday 02 June 2025 17:51:43 +0000 (0:00:01.189) 0:00:39.966 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-5]
skipping: [testbed-node-2]
skipping: [testbed-node-4]
skipping: [testbed-node-3]

TASK [neutron : Ensuring config directories exist] *****************************
Monday 02 June 2025 17:51:45 +0000 (0:00:02.467) 0:00:42.434 ***********
changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})

TASK [neutron : Check if extra ml2 plugins exists] *****************************
Monday 02 June 2025 17:51:49 +0000 (0:00:03.619) 0:00:46.054 ***********
[WARNING]: Skipped
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
due to this access issue:
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
a directory
ok: [testbed-node-0 -> localhost]

TASK [neutron : include_tasks] *************************************************
Monday 02 June 2025 17:51:50 +0000 (0:00:01.018) 0:00:47.072 ***********
included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
Monday 02 June 2025 17:51:52 +0000 (0:00:01.522) 0:00:48.594 ***********
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})

TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
Monday 02 June 2025 17:51:57 +0000 (0:00:05.011) 0:00:53.606 ***********
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-2]
skipping: [testbed-node-0]
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-4]
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-5]

TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
Monday 02 June 2025 17:52:00 +0000 (0:00:02.928) 0:00:56.534 ***********
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-2]
skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-5]

TASK [neutron : Creating TLS backend PEM File] *********************************
Monday 02 June 2025 17:52:03 +0000 (0:00:03.518) 0:01:00.134 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-3]
skipping: [testbed-node-2]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [neutron : Check if policies shall be overwritten] ************************
Monday 02 June 2025 17:52:07 +0000 (0:00:03.518) 0:01:03.652 ***********
skipping: [testbed-node-0]

TASK [neutron : Set neutron policy file] ***************************************
Monday 02 June 2025 17:52:07 +0000 (0:00:00.107) 0:01:03.760 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [neutron : Copying over existing policy file] *****************************
Monday 02 June 2025 17:52:07 +0000 (0:00:00.656) 0:01:04.417 ***********
skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
skipping: [testbed-node-2]
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-5]
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
skipping: [testbed-node-4]

TASK [neutron : Copying over config.json files for services] *******************
Monday 02 June 2025 17:52:10 +0000 (0:00:02.345) 0:01:06.762 ***********
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:55:33.942861 | orchestrator | 2025-06-02 17:55:33.942866 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-02 17:55:33.942872 | orchestrator | Monday 02 June 2025 17:52:14 +0000 (0:00:03.898) 0:01:10.661 *********** 2025-06-02 17:55:33.942884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.942899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.942906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:55:33.942913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.942918 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:55:33.942928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:55:33.942937 | orchestrator | 2025-06-02 17:55:33.942944 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-02 17:55:33.942951 | orchestrator | Monday 02 June 2025 17:52:21 +0000 (0:00:06.965) 0:01:17.627 *********** 2025-06-02 17:55:33.942961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.942968 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.942975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.942982 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.942988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.942995 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.943016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.943033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.943040 | orchestrator | 2025-06-02 17:55:33.943046 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-02 17:55:33.943053 | orchestrator | Monday 02 June 2025 17:52:24 +0000 (0:00:03.163) 0:01:20.791 *********** 2025-06-02 17:55:33.943060 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943066 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.943073 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943080 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:33.943086 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943092 | orchestrator | changed: 
[testbed-node-2] 2025-06-02 17:55:33.943098 | orchestrator | 2025-06-02 17:55:33.943105 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-02 17:55:33.943110 | orchestrator | Monday 02 June 2025 17:52:28 +0000 (0:00:04.222) 0:01:25.013 *********** 2025-06-02 17:55:33.943117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943123 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943141 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943159 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.943176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.943204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.943210 | orchestrator | 2025-06-02 17:55:33.943217 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-02 17:55:33.943222 | orchestrator | Monday 02 June 2025 17:52:32 +0000 
(0:00:04.044) 0:01:29.058 *********** 2025-06-02 17:55:33.943233 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943239 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943245 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943252 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943258 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943264 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943270 | orchestrator | 2025-06-02 17:55:33.943277 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-02 17:55:33.943282 | orchestrator | Monday 02 June 2025 17:52:34 +0000 (0:00:02.377) 0:01:31.436 *********** 2025-06-02 17:55:33.943289 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943295 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943302 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943308 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943314 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943321 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943328 | orchestrator | 2025-06-02 17:55:33.943335 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-02 17:55:33.943341 | orchestrator | Monday 02 June 2025 17:52:38 +0000 (0:00:03.477) 0:01:34.913 *********** 2025-06-02 17:55:33.943348 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943353 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943359 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943370 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943376 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943382 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943388 | orchestrator | 2025-06-02 17:55:33.943395 | orchestrator | TASK [neutron : Copying 
over mlnx_agent.ini] *********************************** 2025-06-02 17:55:33.943401 | orchestrator | Monday 02 June 2025 17:52:41 +0000 (0:00:02.675) 0:01:37.589 *********** 2025-06-02 17:55:33.943406 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943412 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943417 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943424 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943430 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943436 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943442 | orchestrator | 2025-06-02 17:55:33.943449 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-02 17:55:33.943455 | orchestrator | Monday 02 June 2025 17:52:44 +0000 (0:00:03.555) 0:01:41.145 *********** 2025-06-02 17:55:33.943461 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943467 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943473 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943479 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943520 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943526 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943533 | orchestrator | 2025-06-02 17:55:33.943544 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-02 17:55:33.943551 | orchestrator | Monday 02 June 2025 17:52:46 +0000 (0:00:02.238) 0:01:43.383 *********** 2025-06-02 17:55:33.943557 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943563 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943569 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943575 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943581 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943587 | orchestrator | 
skipping: [testbed-node-5] 2025-06-02 17:55:33.943594 | orchestrator | 2025-06-02 17:55:33.943600 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-02 17:55:33.943607 | orchestrator | Monday 02 June 2025 17:52:51 +0000 (0:00:04.281) 0:01:47.664 *********** 2025-06-02 17:55:33.943613 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 17:55:33.943626 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943633 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 17:55:33.943650 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943657 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 17:55:33.943663 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943669 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 17:55:33.943675 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943682 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 17:55:33.943688 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943695 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-02 17:55:33.943701 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943707 | orchestrator | 2025-06-02 17:55:33.943713 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-02 17:55:33.943719 | orchestrator | Monday 02 June 2025 17:52:54 +0000 (0:00:03.711) 0:01:51.375 *********** 2025-06-02 17:55:33.943726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943734 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943747 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.943769 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.943793 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.943807 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943820 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943827 | orchestrator | 2025-06-02 17:55:33.943833 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-02 17:55:33.943840 | orchestrator | Monday 02 June 2025 17:52:57 +0000 (0:00:02.355) 0:01:53.731 *********** 2025-06-02 17:55:33.943852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.943859 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943881 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.943888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.943895 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.943909 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943923 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.943935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.943947 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.943954 | orchestrator | 2025-06-02 17:55:33.943961 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-02 17:55:33.943970 | orchestrator | Monday 02 June 2025 17:52:59 +0000 (0:00:02.767) 0:01:56.498 *********** 2025-06-02 17:55:33.943977 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.943984 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.943990 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.943997 | orchestrator | 
skipping: [testbed-node-3] 2025-06-02 17:55:33.944004 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944011 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944017 | orchestrator | 2025-06-02 17:55:33.944024 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-02 17:55:33.944030 | orchestrator | Monday 02 June 2025 17:53:03 +0000 (0:00:03.605) 0:02:00.104 *********** 2025-06-02 17:55:33.944037 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944044 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944050 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944056 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:55:33.944062 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:55:33.944068 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:55:33.944074 | orchestrator | 2025-06-02 17:55:33.944080 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-02 17:55:33.944087 | orchestrator | Monday 02 June 2025 17:53:09 +0000 (0:00:06.259) 0:02:06.363 *********** 2025-06-02 17:55:33.944093 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944099 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944105 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944111 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944117 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944123 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944129 | orchestrator | 2025-06-02 17:55:33.944135 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-02 17:55:33.944140 | orchestrator | Monday 02 June 2025 17:53:13 +0000 (0:00:04.116) 0:02:10.480 *********** 2025-06-02 17:55:33.944147 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944154 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 17:55:33.944160 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944165 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944172 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944235 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944243 | orchestrator | 2025-06-02 17:55:33.944249 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-02 17:55:33.944254 | orchestrator | Monday 02 June 2025 17:53:18 +0000 (0:00:04.257) 0:02:14.737 *********** 2025-06-02 17:55:33.944261 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944267 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944272 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944279 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944285 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944292 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944299 | orchestrator | 2025-06-02 17:55:33.944305 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-02 17:55:33.944312 | orchestrator | Monday 02 June 2025 17:53:21 +0000 (0:00:02.914) 0:02:17.652 *********** 2025-06-02 17:55:33.944319 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944325 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944331 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944346 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944352 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944359 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944366 | orchestrator | 2025-06-02 17:55:33.944372 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-02 17:55:33.944379 | orchestrator | Monday 02 June 2025 17:53:23 +0000 (0:00:02.399) 
0:02:20.051 *********** 2025-06-02 17:55:33.944386 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944392 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944398 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944405 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944411 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944417 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944424 | orchestrator | 2025-06-02 17:55:33.944430 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-02 17:55:33.944436 | orchestrator | Monday 02 June 2025 17:53:26 +0000 (0:00:03.022) 0:02:23.074 *********** 2025-06-02 17:55:33.944443 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944449 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944455 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944461 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944467 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944472 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944478 | orchestrator | 2025-06-02 17:55:33.944484 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-02 17:55:33.944490 | orchestrator | Monday 02 June 2025 17:53:30 +0000 (0:00:03.647) 0:02:26.721 *********** 2025-06-02 17:55:33.944497 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944512 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944523 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944529 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944535 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944541 | orchestrator | 2025-06-02 17:55:33.944546 | orchestrator | TASK [neutron : Copying over extra 
ml2 plugins] ******************************** 2025-06-02 17:55:33.944552 | orchestrator | Monday 02 June 2025 17:53:33 +0000 (0:00:03.287) 0:02:30.008 *********** 2025-06-02 17:55:33.944558 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944564 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944570 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944577 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944583 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944589 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944595 | orchestrator | 2025-06-02 17:55:33.944601 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-02 17:55:33.944607 | orchestrator | Monday 02 June 2025 17:53:36 +0000 (0:00:02.626) 0:02:32.634 *********** 2025-06-02 17:55:33.944614 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:55:33.944620 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944633 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:55:33.944640 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944646 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:55:33.944652 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944658 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:55:33.944665 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944671 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:55:33.944677 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944688 | orchestrator | skipping: 
[testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:55:33.944694 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944701 | orchestrator | 2025-06-02 17:55:33.944707 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-02 17:55:33.944714 | orchestrator | Monday 02 June 2025 17:53:38 +0000 (0:00:02.806) 0:02:35.440 *********** 2025-06-02 17:55:33.944723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.944732 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.944744 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.944766 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:55:33.944794 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.944805 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-06-02 17:55:33.944816 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944821 | orchestrator | 2025-06-02 17:55:33.944827 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-02 17:55:33.944833 | orchestrator | Monday 02 June 2025 17:53:41 +0000 (0:00:02.325) 0:02:37.766 *********** 2025-06-02 17:55:33.944840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:55:33.944853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.944865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.944878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:55:33.944886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:55:33.944893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:55:33.944900 | orchestrator | 2025-06-02 17:55:33.944907 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 17:55:33.944917 | orchestrator | Monday 02 June 2025 17:53:44 +0000 (0:00:03.212) 0:02:40.978 *********** 2025-06-02 17:55:33.944923 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:55:33.944930 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:55:33.944936 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:55:33.944943 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:55:33.944950 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:55:33.944956 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:55:33.944963 | orchestrator | 2025-06-02 17:55:33.944970 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-02 17:55:33.944981 | orchestrator | Monday 02 June 2025 17:53:45 +0000 (0:00:00.626) 0:02:41.605 *********** 2025-06-02 17:55:33.944987 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.944993 | orchestrator | 2025-06-02 17:55:33.944999 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-02 17:55:33.945006 | orchestrator | Monday 02 June 2025 17:53:47 +0000 (0:00:02.079) 0:02:43.685 *********** 2025-06-02 17:55:33.945011 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.945018 | orchestrator | 2025-06-02 17:55:33.945024 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-02 17:55:33.945030 | orchestrator | Monday 02 June 2025 17:53:49 +0000 (0:00:02.132) 0:02:45.817 *********** 2025-06-02 17:55:33.945036 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.945042 | orchestrator | 2025-06-02 17:55:33.945053 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:55:33.945060 | orchestrator | Monday 02 June 2025 17:54:35 +0000 (0:00:46.337) 0:03:32.155 *********** 2025-06-02 17:55:33.945066 | orchestrator | 2025-06-02 17:55:33.945072 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:55:33.945078 | orchestrator | Monday 02 June 2025 17:54:35 +0000 (0:00:00.061) 0:03:32.217 *********** 2025-06-02 17:55:33.945085 | 
orchestrator | 2025-06-02 17:55:33.945091 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:55:33.945098 | orchestrator | Monday 02 June 2025 17:54:35 +0000 (0:00:00.196) 0:03:32.413 *********** 2025-06-02 17:55:33.945104 | orchestrator | 2025-06-02 17:55:33.945111 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:55:33.945116 | orchestrator | Monday 02 June 2025 17:54:35 +0000 (0:00:00.063) 0:03:32.477 *********** 2025-06-02 17:55:33.945122 | orchestrator | 2025-06-02 17:55:33.945129 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:55:33.945135 | orchestrator | Monday 02 June 2025 17:54:36 +0000 (0:00:00.060) 0:03:32.537 *********** 2025-06-02 17:55:33.945140 | orchestrator | 2025-06-02 17:55:33.945147 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:55:33.945154 | orchestrator | Monday 02 June 2025 17:54:36 +0000 (0:00:00.071) 0:03:32.608 *********** 2025-06-02 17:55:33.945161 | orchestrator | 2025-06-02 17:55:33.945167 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-02 17:55:33.945173 | orchestrator | Monday 02 June 2025 17:54:36 +0000 (0:00:00.066) 0:03:32.675 *********** 2025-06-02 17:55:33.945204 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:55:33.945211 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:55:33.945217 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:55:33.945223 | orchestrator | 2025-06-02 17:55:33.945230 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-02 17:55:33.945236 | orchestrator | Monday 02 June 2025 17:55:04 +0000 (0:00:27.844) 0:04:00.520 *********** 2025-06-02 17:55:33.945241 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:55:33.945247 | 
orchestrator | changed: [testbed-node-3] 2025-06-02 17:55:33.945253 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:55:33.945259 | orchestrator | 2025-06-02 17:55:33.945265 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:55:33.945274 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 17:55:33.945281 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 17:55:33.945287 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 17:55:33.945294 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 17:55:33.945307 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 17:55:33.945314 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 17:55:33.945320 | orchestrator | 2025-06-02 17:55:33.945326 | orchestrator | 2025-06-02 17:55:33.945332 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:55:33.945339 | orchestrator | Monday 02 June 2025 17:55:30 +0000 (0:00:26.667) 0:04:27.187 *********** 2025-06-02 17:55:33.945345 | orchestrator | =============================================================================== 2025-06-02 17:55:33.945350 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.34s 2025-06-02 17:55:33.945356 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.84s 2025-06-02 17:55:33.945362 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 26.67s 2025-06-02 17:55:33.945369 | orchestrator | service-ks-register : neutron | Granting 
user roles --------------------- 7.72s 2025-06-02 17:55:33.945382 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.97s 2025-06-02 17:55:33.945388 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.41s 2025-06-02 17:55:33.945395 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.26s 2025-06-02 17:55:33.945401 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.01s 2025-06-02 17:55:33.945408 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 4.28s 2025-06-02 17:55:33.945414 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.26s 2025-06-02 17:55:33.945421 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.22s 2025-06-02 17:55:33.945428 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 4.12s 2025-06-02 17:55:33.945435 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.04s 2025-06-02 17:55:33.945441 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.90s 2025-06-02 17:55:33.945448 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.82s 2025-06-02 17:55:33.945454 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.71s 2025-06-02 17:55:33.945467 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.68s 2025-06-02 17:55:33.945474 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.65s 2025-06-02 17:55:33.945481 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.62s 2025-06-02 17:55:33.945487 | orchestrator | neutron : Copying over metadata_agent.ini 
------------------------------- 3.61s 2025-06-02 17:55:33.945647 | orchestrator | 2025-06-02 17:55:33 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:55:33.945659 | orchestrator | 2025-06-02 17:55:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:36.997063 | orchestrator | 2025-06-02 17:55:36 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:36.998121 | orchestrator | 2025-06-02 17:55:36 | INFO  | Task 9082ddf3-cd2a-48cd-9d2d-2c2b5ce2dfc9 is in state STARTED 2025-06-02 17:55:37.000303 | orchestrator | 2025-06-02 17:55:36 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:37.001032 | orchestrator | 2025-06-02 17:55:37 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:55:37.001377 | orchestrator | 2025-06-02 17:55:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:40.069490 | orchestrator | 2025-06-02 17:55:40 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:40.073476 | orchestrator | 2025-06-02 17:55:40 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:55:40.075889 | orchestrator | 2025-06-02 17:55:40 | INFO  | Task 9082ddf3-cd2a-48cd-9d2d-2c2b5ce2dfc9 is in state SUCCESS 2025-06-02 17:55:40.078976 | orchestrator | 2025-06-02 17:55:40 | INFO  | Task 6ea8524f-037b-4db1-b3fe-758883a7c58b is in state STARTED 2025-06-02 17:55:40.081554 | orchestrator | 2025-06-02 17:55:40 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:55:40.081807 | orchestrator | 2025-06-02 17:55:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:55:43.125388 | orchestrator | 2025-06-02 17:55:43 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:55:43.125467 | orchestrator | 2025-06-02 17:55:43 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state 
STARTED [... identical polling output elided: tasks d33c75d6-e4e6-44b5-be9b-eb0f0d134468, bf6c0224-e885-4249-b191-8cdeae093de2, 6ea8524f-037b-4db1-b3fe-758883a7c58b, and 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 remained in state STARTED, rechecked every ~3 seconds from 17:55:43 until 17:57:14 ...] 2025-06-02 17:57:14.567488 | orchestrator | 2025-06-02 17:57:14 | INFO  | Task
6ea8524f-037b-4db1-b3fe-758883a7c58b is in state SUCCESS 2025-06-02 17:57:14.568908 | orchestrator | 2025-06-02 17:57:14.568952 | orchestrator | 2025-06-02 17:57:14.568962 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:57:14.568971 | orchestrator | 2025-06-02 17:57:14.568979 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:57:14.568987 | orchestrator | Monday 02 June 2025 17:55:35 +0000 (0:00:00.177) 0:00:00.177 *********** 2025-06-02 17:57:14.569064 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:14.569083 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:57:14.569098 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:14.569113 | orchestrator | 2025-06-02 17:57:14.569127 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:57:14.569142 | orchestrator | Monday 02 June 2025 17:55:36 +0000 (0:00:00.293) 0:00:00.471 *********** 2025-06-02 17:57:14.569152 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 17:57:14.569164 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 17:57:14.569178 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 17:57:14.569192 | orchestrator | 2025-06-02 17:57:14.569269 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-02 17:57:14.569286 | orchestrator | 2025-06-02 17:57:14.569294 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-02 17:57:14.569302 | orchestrator | Monday 02 June 2025 17:55:36 +0000 (0:00:00.646) 0:00:01.118 *********** 2025-06-02 17:57:14.569310 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:14.569318 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:14.569326 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:57:14.569334 | 
orchestrator | 2025-06-02 17:57:14.569351 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:57:14.569377 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:57:14.569387 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:57:14.569395 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:57:14.569403 | orchestrator | 2025-06-02 17:57:14.569411 | orchestrator | 2025-06-02 17:57:14.569419 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:57:14.569427 | orchestrator | Monday 02 June 2025 17:55:37 +0000 (0:00:00.725) 0:00:01.844 *********** 2025-06-02 17:57:14.569435 | orchestrator | =============================================================================== 2025-06-02 17:57:14.569443 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.73s 2025-06-02 17:57:14.569451 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2025-06-02 17:57:14.569459 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-06-02 17:57:14.569467 | orchestrator | 2025-06-02 17:57:14.569474 | orchestrator | 2025-06-02 17:57:14.569482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:57:14.569490 | orchestrator | 2025-06-02 17:57:14.569498 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:57:14.569506 | orchestrator | Monday 02 June 2025 17:55:11 +0000 (0:00:00.203) 0:00:00.203 *********** 2025-06-02 17:57:14.569514 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:14.569521 | orchestrator | ok: [testbed-node-1] 2025-06-02 
17:57:14.569529 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:14.569537 | orchestrator | 2025-06-02 17:57:14.569545 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:57:14.569553 | orchestrator | Monday 02 June 2025 17:55:11 +0000 (0:00:00.289) 0:00:00.492 *********** 2025-06-02 17:57:14.569561 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-02 17:57:14.569569 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-02 17:57:14.569577 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-02 17:57:14.569585 | orchestrator | 2025-06-02 17:57:14.569592 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-02 17:57:14.569600 | orchestrator | 2025-06-02 17:57:14.569608 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 17:57:14.569616 | orchestrator | Monday 02 June 2025 17:55:12 +0000 (0:00:00.365) 0:00:00.857 *********** 2025-06-02 17:57:14.569624 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:14.569632 | orchestrator | 2025-06-02 17:57:14.569639 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-02 17:57:14.569648 | orchestrator | Monday 02 June 2025 17:55:12 +0000 (0:00:00.498) 0:00:01.356 *********** 2025-06-02 17:57:14.569656 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-02 17:57:14.569664 | orchestrator | 2025-06-02 17:57:14.569672 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-02 17:57:14.569680 | orchestrator | Monday 02 June 2025 17:55:16 +0000 (0:00:04.131) 0:00:05.488 *********** 2025-06-02 17:57:14.569687 | orchestrator | changed: [testbed-node-0] => (item=magnum -> 
https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-02 17:57:14.569695 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-02 17:57:14.569703 | orchestrator | 2025-06-02 17:57:14.569711 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-02 17:57:14.569719 | orchestrator | Monday 02 June 2025 17:55:23 +0000 (0:00:06.501) 0:00:11.990 *********** 2025-06-02 17:57:14.569727 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:57:14.569740 | orchestrator | 2025-06-02 17:57:14.569748 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-02 17:57:14.569756 | orchestrator | Monday 02 June 2025 17:55:26 +0000 (0:00:03.429) 0:00:15.419 *********** 2025-06-02 17:57:14.569775 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:57:14.569784 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-02 17:57:14.569797 | orchestrator | 2025-06-02 17:57:14.569810 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-02 17:57:14.569823 | orchestrator | Monday 02 June 2025 17:55:30 +0000 (0:00:04.000) 0:00:19.420 *********** 2025-06-02 17:57:14.569835 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:57:14.569843 | orchestrator | 2025-06-02 17:57:14.569851 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-02 17:57:14.569859 | orchestrator | Monday 02 June 2025 17:55:34 +0000 (0:00:03.598) 0:00:23.018 *********** 2025-06-02 17:57:14.569867 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-02 17:57:14.569875 | orchestrator | 2025-06-02 17:57:14.569882 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-02 
17:57:14.569890 | orchestrator | Monday 02 June 2025 17:55:38 +0000 (0:00:04.082) 0:00:27.101 *********** 2025-06-02 17:57:14.569898 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.569906 | orchestrator | 2025-06-02 17:57:14.569914 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-02 17:57:14.569922 | orchestrator | Monday 02 June 2025 17:55:42 +0000 (0:00:03.490) 0:00:30.592 *********** 2025-06-02 17:57:14.569930 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.569937 | orchestrator | 2025-06-02 17:57:14.569945 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-02 17:57:14.569958 | orchestrator | Monday 02 June 2025 17:55:46 +0000 (0:00:04.122) 0:00:34.715 *********** 2025-06-02 17:57:14.569966 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.569974 | orchestrator | 2025-06-02 17:57:14.569982 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-02 17:57:14.569990 | orchestrator | Monday 02 June 2025 17:55:49 +0000 (0:00:03.658) 0:00:38.373 *********** 2025-06-02 17:57:14.570000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570167 | orchestrator | 2025-06-02 17:57:14.570175 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-02 17:57:14.570183 | orchestrator | Monday 02 June 2025 17:55:51 +0000 (0:00:01.670) 0:00:40.044 *********** 2025-06-02 17:57:14.570191 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:14.570199 | orchestrator | 2025-06-02 17:57:14.570225 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-02 17:57:14.570234 | orchestrator | Monday 02 June 2025 17:55:51 +0000 (0:00:00.138) 0:00:40.183 *********** 2025-06-02 17:57:14.570242 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:14.570255 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:14.570263 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:14.570271 | orchestrator | 2025-06-02 17:57:14.570279 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-02 17:57:14.570287 | orchestrator | Monday 02 June 2025 17:55:52 +0000 (0:00:00.525) 0:00:40.708 *********** 2025-06-02 17:57:14.570295 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:57:14.570303 | orchestrator | 2025-06-02 17:57:14.570311 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-02 17:57:14.570319 | orchestrator | Monday 02 June 2025 17:55:53 +0000 (0:00:00.920) 0:00:41.629 *********** 2025-06-02 17:57:14.570327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570392 | orchestrator | 2025-06-02 17:57:14.570400 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-02 17:57:14.570412 | orchestrator | Monday 02 June 2025 17:55:55 +0000 (0:00:02.446) 0:00:44.075 *********** 2025-06-02 17:57:14.570420 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:14.570428 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:57:14.570436 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:14.570444 | orchestrator | 2025-06-02 17:57:14.570452 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 17:57:14.570460 | orchestrator | Monday 02 June 2025 17:55:55 +0000 (0:00:00.335) 0:00:44.410 *********** 2025-06-02 17:57:14.570468 | orchestrator | included: 
/ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:14.570476 | orchestrator | 2025-06-02 17:57:14.570484 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-02 17:57:14.570492 | orchestrator | Monday 02 June 2025 17:55:56 +0000 (0:00:00.752) 0:00:45.163 *********** 2025-06-02 17:57:14.570503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': 
'30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570567 | orchestrator | 2025-06-02 17:57:14.570575 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-02 17:57:14.570590 | orchestrator | Monday 02 June 2025 17:55:58 +0000 (0:00:02.273) 0:00:47.436 *********** 2025-06-02 17:57:14.570599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.570607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.570616 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:14.570629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.570637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.570645 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:14.570657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.570675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.570683 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:14.570691 | orchestrator | 2025-06-02 17:57:14.570699 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-02 17:57:14.570707 | orchestrator | Monday 02 June 2025 17:55:59 +0000 (0:00:00.697) 0:00:48.134 *********** 2025-06-02 17:57:14.570715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.570729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.570738 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:14.570750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.570763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.570771 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:14.570779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.570788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.570796 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:14.570804 | orchestrator | 2025-06-02 17:57:14.570812 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-02 17:57:14.570820 | orchestrator | Monday 02 June 2025 17:56:00 +0000 (0:00:01.316) 0:00:49.450 *********** 2025-06-02 17:57:14.570834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570903 | orchestrator | 2025-06-02 17:57:14.570911 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-02 17:57:14.570918 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:02.547) 0:00:51.997 *********** 2025-06-02 17:57:14.570930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.570960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.570993 | orchestrator | 2025-06-02 17:57:14.571001 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-02 17:57:14.571009 | orchestrator | Monday 02 June 2025 17:56:10 +0000 (0:00:07.315) 0:00:59.313 *********** 2025-06-02 17:57:14.571017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.571025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.571033 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:14.571047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.571063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 
17:57:14.571072 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:14.571080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:57:14.571088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:14.571096 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:14.571104 | orchestrator | 2025-06-02 17:57:14.571112 | orchestrator | TASK [magnum : Check magnum containers] 
**************************************** 2025-06-02 17:57:14.571120 | orchestrator | Monday 02 June 2025 17:56:11 +0000 (0:00:01.133) 0:01:00.447 *********** 2025-06-02 17:57:14.571133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.571146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.571158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:57:14.571166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.571175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.571187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:14.571199 | orchestrator | 2025-06-02 17:57:14.571260 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 17:57:14.571270 | orchestrator | Monday 02 June 2025 17:56:13 +0000 (0:00:02.013) 0:01:02.460 *********** 2025-06-02 17:57:14.571278 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:14.571287 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:14.571295 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:14.571302 | orchestrator | 2025-06-02 17:57:14.571310 | orchestrator | TASK [magnum : Creating Magnum database] 
*************************************** 2025-06-02 17:57:14.571318 | orchestrator | Monday 02 June 2025 17:56:14 +0000 (0:00:00.263) 0:01:02.724 *********** 2025-06-02 17:57:14.571326 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.571334 | orchestrator | 2025-06-02 17:57:14.571342 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-02 17:57:14.571350 | orchestrator | Monday 02 June 2025 17:56:16 +0000 (0:00:02.076) 0:01:04.801 *********** 2025-06-02 17:57:14.571357 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.571365 | orchestrator | 2025-06-02 17:57:14.571373 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-02 17:57:14.571381 | orchestrator | Monday 02 June 2025 17:56:18 +0000 (0:00:02.189) 0:01:06.990 *********** 2025-06-02 17:57:14.571389 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.571397 | orchestrator | 2025-06-02 17:57:14.571405 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 17:57:14.571417 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:16.019) 0:01:23.009 *********** 2025-06-02 17:57:14.571425 | orchestrator | 2025-06-02 17:57:14.571433 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 17:57:14.571441 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:00.071) 0:01:23.080 *********** 2025-06-02 17:57:14.571449 | orchestrator | 2025-06-02 17:57:14.571457 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 17:57:14.571464 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:00.074) 0:01:23.155 *********** 2025-06-02 17:57:14.571472 | orchestrator | 2025-06-02 17:57:14.571480 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-02 17:57:14.571488 | 
orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:00.066) 0:01:23.221 *********** 2025-06-02 17:57:14.571496 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.571505 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:57:14.571513 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:14.571521 | orchestrator | 2025-06-02 17:57:14.571528 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-02 17:57:14.571536 | orchestrator | Monday 02 June 2025 17:56:53 +0000 (0:00:19.282) 0:01:42.504 *********** 2025-06-02 17:57:14.571544 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:14.571552 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:57:14.571560 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:14.571568 | orchestrator | 2025-06-02 17:57:14.571576 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:57:14.571584 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:57:14.571593 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:57:14.571601 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:57:14.571609 | orchestrator | 2025-06-02 17:57:14.571617 | orchestrator | 2025-06-02 17:57:14.571633 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:57:14.571641 | orchestrator | Monday 02 June 2025 17:57:11 +0000 (0:00:17.248) 0:01:59.752 *********** 2025-06-02 17:57:14.571649 | orchestrator | =============================================================================== 2025-06-02 17:57:14.571657 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.28s 2025-06-02 17:57:14.571665 | orchestrator | magnum : 
Restart magnum-conductor container ---------------------------- 17.25s 2025-06-02 17:57:14.571673 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.02s 2025-06-02 17:57:14.571681 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.32s 2025-06-02 17:57:14.571689 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.50s 2025-06-02 17:57:14.571697 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.13s 2025-06-02 17:57:14.571705 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.12s 2025-06-02 17:57:14.571713 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.08s 2025-06-02 17:57:14.571720 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.00s 2025-06-02 17:57:14.571728 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.66s 2025-06-02 17:57:14.571736 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.60s 2025-06-02 17:57:14.571744 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.49s 2025-06-02 17:57:14.571752 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.43s 2025-06-02 17:57:14.571760 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.55s 2025-06-02 17:57:14.571768 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.45s 2025-06-02 17:57:14.571776 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.27s 2025-06-02 17:57:14.571790 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.19s 2025-06-02 17:57:14.571799 | orchestrator | magnum : Creating Magnum 
database --------------------------------------- 2.08s 2025-06-02 17:57:14.571806 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.01s 2025-06-02 17:57:14.571815 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.67s 2025-06-02 17:57:14.571823 | orchestrator | 2025-06-02 17:57:14 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:14.571831 | orchestrator | 2025-06-02 17:57:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:17.609194 | orchestrator | 2025-06-02 17:57:17 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:57:17.610339 | orchestrator | 2025-06-02 17:57:17 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:17.611977 | orchestrator | 2025-06-02 17:57:17 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:17.612022 | orchestrator | 2025-06-02 17:57:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:20.668288 | orchestrator | 2025-06-02 17:57:20 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:57:20.670358 | orchestrator | 2025-06-02 17:57:20 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:20.672352 | orchestrator | 2025-06-02 17:57:20 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:20.672588 | orchestrator | 2025-06-02 17:57:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:23.715944 | orchestrator | 2025-06-02 17:57:23 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:57:23.718399 | orchestrator | 2025-06-02 17:57:23 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:23.720410 | orchestrator | 2025-06-02 17:57:23 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 
2025-06-02 17:57:23.720499 | orchestrator | 2025-06-02 17:57:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:26.768488 | orchestrator | 2025-06-02 17:57:26 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:57:26.770318 | orchestrator | 2025-06-02 17:57:26 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:26.772523 | orchestrator | 2025-06-02 17:57:26 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:26.772572 | orchestrator | 2025-06-02 17:57:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:29.814520 | orchestrator | 2025-06-02 17:57:29 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:57:29.814955 | orchestrator | 2025-06-02 17:57:29 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:29.816484 | orchestrator | 2025-06-02 17:57:29 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:29.816522 | orchestrator | 2025-06-02 17:57:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:32.866562 | orchestrator | 2025-06-02 17:57:32 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:57:32.870634 | orchestrator | 2025-06-02 17:57:32 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:32.872608 | orchestrator | 2025-06-02 17:57:32 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:32.872623 | orchestrator | 2025-06-02 17:57:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:35.925958 | orchestrator | 2025-06-02 17:57:35 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state STARTED 2025-06-02 17:57:35.928524 | orchestrator | 2025-06-02 17:57:35 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:35.929543 | orchestrator | 2025-06-02 
17:57:35 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:35.930046 | orchestrator | 2025-06-02 17:57:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:38.986401 | orchestrator | 2025-06-02 17:57:38 | INFO  | Task d33c75d6-e4e6-44b5-be9b-eb0f0d134468 is in state SUCCESS 2025-06-02 17:57:38.988005 | orchestrator | 2025-06-02 17:57:38.988067 | orchestrator | 2025-06-02 17:57:38.988077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:57:38.988088 | orchestrator | 2025-06-02 17:57:38.988096 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-02 17:57:38.988105 | orchestrator | Monday 02 June 2025 17:48:32 +0000 (0:00:00.351) 0:00:00.351 *********** 2025-06-02 17:57:38.988113 | orchestrator | changed: [testbed-manager] 2025-06-02 17:57:38.988122 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.988131 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:38.988139 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:57:38.988148 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.988156 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.988164 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.988172 | orchestrator | 2025-06-02 17:57:38.988179 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:57:38.988187 | orchestrator | Monday 02 June 2025 17:48:32 +0000 (0:00:00.719) 0:00:01.071 *********** 2025-06-02 17:57:38.988195 | orchestrator | changed: [testbed-manager] 2025-06-02 17:57:38.988202 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.988255 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:38.988264 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:57:38.988272 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.988280 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 17:57:38.988288 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.988296 | orchestrator | 2025-06-02 17:57:38.988304 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:57:38.988312 | orchestrator | Monday 02 June 2025 17:48:33 +0000 (0:00:00.629) 0:00:01.700 *********** 2025-06-02 17:57:38.988321 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-02 17:57:38.988330 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 17:57:38.988338 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 17:57:38.988359 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 17:57:38.988367 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-02 17:57:38.988375 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-02 17:57:38.988383 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-02 17:57:38.988391 | orchestrator | 2025-06-02 17:57:38.988400 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-02 17:57:38.988408 | orchestrator | 2025-06-02 17:57:38.988416 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 17:57:38.988424 | orchestrator | Monday 02 June 2025 17:48:34 +0000 (0:00:00.698) 0:00:02.398 *********** 2025-06-02 17:57:38.988432 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:38.988517 | orchestrator | 2025-06-02 17:57:38.988525 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-02 17:57:38.988553 | orchestrator | Monday 02 June 2025 17:48:34 +0000 (0:00:00.599) 0:00:02.998 *********** 2025-06-02 17:57:38.988563 | orchestrator | changed: [testbed-node-0] => 
(item=nova_cell0) 2025-06-02 17:57:38.988572 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-02 17:57:38.988581 | orchestrator | 2025-06-02 17:57:38.988589 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-02 17:57:38.988597 | orchestrator | Monday 02 June 2025 17:48:38 +0000 (0:00:03.778) 0:00:06.777 *********** 2025-06-02 17:57:38.988605 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:57:38.988614 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:57:38.988649 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.988658 | orchestrator | 2025-06-02 17:57:38.988667 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-02 17:57:38.988676 | orchestrator | Monday 02 June 2025 17:48:42 +0000 (0:00:03.551) 0:00:10.328 *********** 2025-06-02 17:57:38.988685 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.988694 | orchestrator | 2025-06-02 17:57:38.988704 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-02 17:57:38.988749 | orchestrator | Monday 02 June 2025 17:48:42 +0000 (0:00:00.607) 0:00:10.936 *********** 2025-06-02 17:57:38.988806 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.988821 | orchestrator | 2025-06-02 17:57:38.988836 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-02 17:57:38.988851 | orchestrator | Monday 02 June 2025 17:48:44 +0000 (0:00:01.729) 0:00:12.666 *********** 2025-06-02 17:57:38.988867 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.988882 | orchestrator | 2025-06-02 17:57:38.988897 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 17:57:38.988912 | orchestrator | Monday 02 June 2025 17:48:47 +0000 (0:00:03.065) 0:00:15.731 *********** 2025-06-02 
17:57:38.988927 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.988943 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.988958 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.988973 | orchestrator | 2025-06-02 17:57:38.988985 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 17:57:38.989012 | orchestrator | Monday 02 June 2025 17:48:48 +0000 (0:00:00.602) 0:00:16.334 *********** 2025-06-02 17:57:38.989021 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.989050 | orchestrator | 2025-06-02 17:57:38.989058 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-02 17:57:38.989067 | orchestrator | Monday 02 June 2025 17:49:17 +0000 (0:00:29.436) 0:00:45.770 *********** 2025-06-02 17:57:38.989075 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.989083 | orchestrator | 2025-06-02 17:57:38.989092 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 17:57:38.989102 | orchestrator | Monday 02 June 2025 17:49:31 +0000 (0:00:14.392) 0:01:00.163 *********** 2025-06-02 17:57:38.989111 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.989121 | orchestrator | 2025-06-02 17:57:38.989130 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 17:57:38.989139 | orchestrator | Monday 02 June 2025 17:49:42 +0000 (0:00:10.277) 0:01:10.441 *********** 2025-06-02 17:57:38.989163 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.989171 | orchestrator | 2025-06-02 17:57:38.989179 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-02 17:57:38.989188 | orchestrator | Monday 02 June 2025 17:49:43 +0000 (0:00:00.897) 0:01:11.339 *********** 2025-06-02 17:57:38.989195 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.989203 | 
orchestrator | 2025-06-02 17:57:38.989234 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 17:57:38.989242 | orchestrator | Monday 02 June 2025 17:49:43 +0000 (0:00:00.436) 0:01:11.776 *********** 2025-06-02 17:57:38.989275 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:38.989283 | orchestrator | 2025-06-02 17:57:38.989292 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 17:57:38.989299 | orchestrator | Monday 02 June 2025 17:49:44 +0000 (0:00:00.480) 0:01:12.256 *********** 2025-06-02 17:57:38.989307 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.989344 | orchestrator | 2025-06-02 17:57:38.989353 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 17:57:38.989361 | orchestrator | Monday 02 June 2025 17:49:58 +0000 (0:00:14.474) 0:01:26.730 *********** 2025-06-02 17:57:38.989370 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.989378 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989386 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989394 | orchestrator | 2025-06-02 17:57:38.989402 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-02 17:57:38.989410 | orchestrator | 2025-06-02 17:57:38.989462 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 17:57:38.989470 | orchestrator | Monday 02 June 2025 17:49:58 +0000 (0:00:00.375) 0:01:27.106 *********** 2025-06-02 17:57:38.989478 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:38.989486 | orchestrator | 2025-06-02 17:57:38.989501 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 
2025-06-02 17:57:38.989510 | orchestrator | Monday 02 June 2025 17:49:59 +0000 (0:00:00.616) 0:01:27.723 *********** 2025-06-02 17:57:38.989518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989526 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989534 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.989542 | orchestrator | 2025-06-02 17:57:38.989551 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-02 17:57:38.989559 | orchestrator | Monday 02 June 2025 17:50:01 +0000 (0:00:02.111) 0:01:29.835 *********** 2025-06-02 17:57:38.989567 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989575 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989583 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.989591 | orchestrator | 2025-06-02 17:57:38.989608 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 17:57:38.989616 | orchestrator | Monday 02 June 2025 17:50:03 +0000 (0:00:02.146) 0:01:31.982 *********** 2025-06-02 17:57:38.989624 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.989633 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989641 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989649 | orchestrator | 2025-06-02 17:57:38.989657 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 17:57:38.989664 | orchestrator | Monday 02 June 2025 17:50:04 +0000 (0:00:00.336) 0:01:32.318 *********** 2025-06-02 17:57:38.989672 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 17:57:38.989680 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989688 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 17:57:38.989696 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989705 | orchestrator | ok: [testbed-node-0] => 
(item=None) 2025-06-02 17:57:38.989713 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-06-02 17:57:38.989721 | orchestrator | 2025-06-02 17:57:38.989729 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 17:57:38.989737 | orchestrator | Monday 02 June 2025 17:50:12 +0000 (0:00:08.579) 0:01:40.898 *********** 2025-06-02 17:57:38.989745 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.989754 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989762 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989770 | orchestrator | 2025-06-02 17:57:38.989778 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 17:57:38.989786 | orchestrator | Monday 02 June 2025 17:50:13 +0000 (0:00:00.322) 0:01:41.221 *********** 2025-06-02 17:57:38.989794 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 17:57:38.989802 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.989811 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 17:57:38.989819 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989827 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 17:57:38.989836 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989844 | orchestrator | 2025-06-02 17:57:38.989853 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-02 17:57:38.989861 | orchestrator | Monday 02 June 2025 17:50:13 +0000 (0:00:00.683) 0:01:41.905 *********** 2025-06-02 17:57:38.989870 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989878 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.989886 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989894 | orchestrator | 2025-06-02 17:57:38.989902 | orchestrator | TASK [nova-cell : Copying over 
config.json files for nova-cell-bootstrap] ****** 2025-06-02 17:57:38.989910 | orchestrator | Monday 02 June 2025 17:50:14 +0000 (0:00:00.540) 0:01:42.445 *********** 2025-06-02 17:57:38.989918 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989926 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989935 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.989944 | orchestrator | 2025-06-02 17:57:38.989952 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-02 17:57:38.989960 | orchestrator | Monday 02 June 2025 17:50:15 +0000 (0:00:00.938) 0:01:43.384 *********** 2025-06-02 17:57:38.989968 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.989976 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.989994 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.990003 | orchestrator | 2025-06-02 17:57:38.990010 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-06-02 17:57:38.990067 | orchestrator | Monday 02 June 2025 17:50:17 +0000 (0:00:02.160) 0:01:45.544 *********** 2025-06-02 17:57:38.990075 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990083 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990091 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.990108 | orchestrator | 2025-06-02 17:57:38.990116 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 17:57:38.990124 | orchestrator | Monday 02 June 2025 17:50:39 +0000 (0:00:22.376) 0:02:07.921 *********** 2025-06-02 17:57:38.990132 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990139 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990148 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.990156 | orchestrator | 2025-06-02 17:57:38.990163 | orchestrator | TASK [nova-cell : Extract current cell settings 
from list] ********************* 2025-06-02 17:57:38.990171 | orchestrator | Monday 02 June 2025 17:50:54 +0000 (0:00:14.258) 0:02:22.179 *********** 2025-06-02 17:57:38.990179 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.990187 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990195 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990203 | orchestrator | 2025-06-02 17:57:38.990265 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-02 17:57:38.990273 | orchestrator | Monday 02 June 2025 17:50:54 +0000 (0:00:00.735) 0:02:22.915 *********** 2025-06-02 17:57:38.990281 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990288 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990297 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.990304 | orchestrator | 2025-06-02 17:57:38.990313 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-02 17:57:38.990321 | orchestrator | Monday 02 June 2025 17:51:05 +0000 (0:00:10.539) 0:02:33.455 *********** 2025-06-02 17:57:38.990336 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.990344 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990352 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990360 | orchestrator | 2025-06-02 17:57:38.990366 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 17:57:38.990371 | orchestrator | Monday 02 June 2025 17:51:07 +0000 (0:00:02.133) 0:02:35.588 *********** 2025-06-02 17:57:38.990375 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.990380 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990385 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990390 | orchestrator | 2025-06-02 17:57:38.990394 | orchestrator | PLAY [Apply role nova] 
********************************************************* 2025-06-02 17:57:38.990399 | orchestrator | 2025-06-02 17:57:38.990404 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 17:57:38.990409 | orchestrator | Monday 02 June 2025 17:51:07 +0000 (0:00:00.426) 0:02:36.015 *********** 2025-06-02 17:57:38.990414 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:38.990420 | orchestrator | 2025-06-02 17:57:38.990425 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-02 17:57:38.990429 | orchestrator | Monday 02 June 2025 17:51:08 +0000 (0:00:00.662) 0:02:36.677 *********** 2025-06-02 17:57:38.990434 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-02 17:57:38.990439 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-02 17:57:38.990444 | orchestrator | 2025-06-02 17:57:38.990449 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-02 17:57:38.990453 | orchestrator | Monday 02 June 2025 17:51:11 +0000 (0:00:03.195) 0:02:39.872 *********** 2025-06-02 17:57:38.990458 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-02 17:57:38.990464 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-02 17:57:38.990469 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-02 17:57:38.990474 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-02 17:57:38.990479 | orchestrator | 2025-06-02 17:57:38.990484 | orchestrator | TASK [service-ks-register : nova | 
Creating projects] ************************** 2025-06-02 17:57:38.990495 | orchestrator | Monday 02 June 2025 17:51:18 +0000 (0:00:06.730) 0:02:46.603 *********** 2025-06-02 17:57:38.990500 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:57:38.990507 | orchestrator | 2025-06-02 17:57:38.990515 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-02 17:57:38.990522 | orchestrator | Monday 02 June 2025 17:51:21 +0000 (0:00:03.125) 0:02:49.729 *********** 2025-06-02 17:57:38.990530 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:57:38.990537 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-02 17:57:38.990545 | orchestrator | 2025-06-02 17:57:38.990553 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-02 17:57:38.990560 | orchestrator | Monday 02 June 2025 17:51:25 +0000 (0:00:03.854) 0:02:53.584 *********** 2025-06-02 17:57:38.990567 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:57:38.990574 | orchestrator | 2025-06-02 17:57:38.990581 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-02 17:57:38.990588 | orchestrator | Monday 02 June 2025 17:51:28 +0000 (0:00:03.196) 0:02:56.780 *********** 2025-06-02 17:57:38.990594 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-02 17:57:38.990602 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-02 17:57:38.990609 | orchestrator | 2025-06-02 17:57:38.990616 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-02 17:57:38.990637 | orchestrator | Monday 02 June 2025 17:51:36 +0000 (0:00:07.542) 0:03:04.322 *********** 2025-06-02 17:57:38.990656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.990669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.990685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.990702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.990711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.990723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.990732 | orchestrator | 2025-06-02 17:57:38.990740 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-02 17:57:38.990748 | orchestrator | Monday 02 June 2025 17:51:37 +0000 (0:00:01.272) 0:03:05.595 *********** 2025-06-02 17:57:38.990756 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
17:57:38.990764 | orchestrator | 2025-06-02 17:57:38.990773 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-02 17:57:38.990781 | orchestrator | Monday 02 June 2025 17:51:37 +0000 (0:00:00.123) 0:03:05.719 *********** 2025-06-02 17:57:38.990789 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.990797 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990805 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990818 | orchestrator | 2025-06-02 17:57:38.990826 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-02 17:57:38.990834 | orchestrator | Monday 02 June 2025 17:51:38 +0000 (0:00:00.558) 0:03:06.277 *********** 2025-06-02 17:57:38.990842 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:57:38.990850 | orchestrator | 2025-06-02 17:57:38.990858 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-02 17:57:38.990866 | orchestrator | Monday 02 June 2025 17:51:38 +0000 (0:00:00.673) 0:03:06.951 *********** 2025-06-02 17:57:38.990874 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.990882 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.990890 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.990898 | orchestrator | 2025-06-02 17:57:38.990906 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 17:57:38.990941 | orchestrator | Monday 02 June 2025 17:51:39 +0000 (0:00:00.327) 0:03:07.278 *********** 2025-06-02 17:57:38.990950 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:38.990959 | orchestrator | 2025-06-02 17:57:38.990968 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 17:57:38.990976 | orchestrator | 
Monday 02 June 2025 17:51:39 +0000 (0:00:00.724) 0:03:08.003 *********** 2025-06-02 17:57:38.990990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991028 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991091 | orchestrator | 2025-06-02 17:57:38.991099 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 
17:57:38.991110 | orchestrator | Monday 02 June 2025 17:51:42 +0000 (0:00:02.351) 0:03:10.355 *********** 2025-06-02 17:57:38.991128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-06-02 17:57:38.991153 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.991162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2025-06-02 17:57:38.991185 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.991197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2025-06-02 17:57:38.991280 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.991288 | orchestrator | 2025-06-02 17:57:38.991296 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 17:57:38.991305 | orchestrator | Monday 02 June 2025 17:51:42 +0000 (0:00:00.630) 0:03:10.985 *********** 2025-06-02 17:57:38.991314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.991327 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.991339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.991427 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.991432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.991442 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.991447 | orchestrator | 2025-06-02 17:57:38.991452 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-02 17:57:38.991457 | orchestrator | Monday 02 June 2025 17:51:43 +0000 (0:00:01.038) 0:03:12.024 *********** 2025-06-02 17:57:38.991467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991480 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991515 | orchestrator | 2025-06-02 17:57:38.991520 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-02 17:57:38.991527 | orchestrator | Monday 02 June 2025 17:51:47 +0000 (0:00:03.259) 0:03:15.284 *********** 2025-06-02 17:57:38.991533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-06-02 17:57:38.991538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991573 | orchestrator | 2025-06-02 17:57:38.991578 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-02 17:57:38.991582 | orchestrator | Monday 02 June 2025 17:51:55 +0000 (0:00:08.767) 0:03:24.051 *********** 2025-06-02 17:57:38.991591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.991605 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.991612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}})  2025-06-02 17:57:38.991617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.991622 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.991627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})
2025-06-02 17:57:38.991640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:57:38.991645 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:38.991650 | orchestrator |
2025-06-02 17:57:38.991654 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-06-02 17:57:38.991659 | orchestrator | Monday 02 June 2025 17:51:56 +0000 (0:00:01.051) 0:03:25.103 ***********
2025-06-02 17:57:38.991664 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:57:38.991668 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:57:38.991673 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:57:38.991677 | orchestrator |
2025-06-02 17:57:38.991682 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-06-02 17:57:38.991686 | orchestrator | Monday 02 June 2025 17:51:59 +0000 (0:00:02.775) 0:03:27.879 ***********
2025-06-02 17:57:38.991691 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:38.991695 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:38.991700 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:38.991704 | orchestrator |
2025-06-02 17:57:38.991709 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-06-02 17:57:38.991713 | orchestrator | Monday 02 June 2025 17:52:00 +0000 (0:00:00.302) 0:03:28.181
*********** 2025-06-02 17:57:38.991723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:57:38.991751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.991765 | orchestrator | 2025-06-02 17:57:38.991770 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 17:57:38.991775 | orchestrator | Monday 02 June 2025 17:52:02 +0000 
(0:00:02.298) 0:03:30.480 ***********
2025-06-02 17:57:38.991779 | orchestrator |
2025-06-02 17:57:38.991784 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-06-02 17:57:38.991788 | orchestrator | Monday 02 June 2025 17:52:02 +0000 (0:00:00.240) 0:03:30.721 ***********
2025-06-02 17:57:38.991793 | orchestrator |
2025-06-02 17:57:38.991797 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-06-02 17:57:38.991805 | orchestrator | Monday 02 June 2025 17:52:02 +0000 (0:00:00.133) 0:03:30.854 ***********
2025-06-02 17:57:38.991810 | orchestrator |
2025-06-02 17:57:38.991815 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-06-02 17:57:38.991819 | orchestrator | Monday 02 June 2025 17:52:02 +0000 (0:00:00.227) 0:03:31.082 ***********
2025-06-02 17:57:38.991824 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:57:38.991828 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:57:38.991833 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:57:38.991837 | orchestrator |
2025-06-02 17:57:38.991842 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-06-02 17:57:38.991847 | orchestrator | Monday 02 June 2025 17:52:24 +0000 (0:00:21.508) 0:03:52.591 ***********
2025-06-02 17:57:38.991851 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:57:38.991855 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:57:38.991860 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:57:38.991864 | orchestrator |
2025-06-02 17:57:38.991869 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-06-02 17:57:38.991874 | orchestrator |
2025-06-02 17:57:38.991878 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 17:57:38.991883 | orchestrator | Monday 02 June 2025 17:52:37 +0000 (0:00:13.092) 0:04:05.683 ***********
2025-06-02 17:57:38.991888 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:57:38.991893 | orchestrator |
2025-06-02 17:57:38.991900 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 17:57:38.991905 | orchestrator | Monday 02 June 2025 17:52:39 +0000 (0:00:01.775) 0:04:07.459 ***********
2025-06-02 17:57:38.991910 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:38.991915 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:38.991919 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:38.991924 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:38.991928 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:38.991933 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:38.991937 | orchestrator |
2025-06-02 17:57:38.991942 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-06-02 17:57:38.991947 | orchestrator | Monday 02 June 2025 17:52:40 +0000 (0:00:01.521) 0:04:08.981 ***********
2025-06-02 17:57:38.991951 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:38.991955 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:38.991960 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:38.991965 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:57:38.991972 | orchestrator |
2025-06-02 17:57:38.991979 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 17:57:38.991987 | orchestrator | Monday 02 June 2025 17:52:42 +0000 (0:00:01.947) 0:04:10.928 ***********
2025-06-02 17:57:38.991994 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-06-02 17:57:38.992001 | orchestrator |
ok: [testbed-node-3] => (item=br_netfilter) 2025-06-02 17:57:38.992008 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-02 17:57:38.992016 | orchestrator | 2025-06-02 17:57:38.992024 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 17:57:38.992033 | orchestrator | Monday 02 June 2025 17:52:44 +0000 (0:00:01.313) 0:04:12.242 *********** 2025-06-02 17:57:38.992040 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-02 17:57:38.992047 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-02 17:57:38.992059 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-02 17:57:38.992067 | orchestrator | 2025-06-02 17:57:38.992075 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 17:57:38.992084 | orchestrator | Monday 02 June 2025 17:52:45 +0000 (0:00:01.364) 0:04:13.607 *********** 2025-06-02 17:57:38.992099 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-02 17:57:38.992106 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.992114 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-02 17:57:38.992121 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.992128 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-02 17:57:38.992135 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.992143 | orchestrator | 2025-06-02 17:57:38.992150 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-02 17:57:38.992158 | orchestrator | Monday 02 June 2025 17:52:46 +0000 (0:00:01.064) 0:04:14.671 *********** 2025-06-02 17:57:38.992166 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:57:38.992173 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:57:38.992180 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.992187 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:57:38.992194 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:57:38.992201 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.992257 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:57:38.992266 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:57:38.992274 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.992281 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 17:57:38.992289 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 17:57:38.992296 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 17:57:38.992304 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 17:57:38.992312 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 17:57:38.992319 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 17:57:38.992327 | orchestrator | 2025-06-02 17:57:38.992335 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-02 17:57:38.992343 | orchestrator | Monday 02 June 2025 17:52:48 +0000 (0:00:02.262) 0:04:16.934 *********** 2025-06-02 17:57:38.992350 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.992358 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.992365 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.992373 | orchestrator | changed: [testbed-node-4] 2025-06-02 
17:57:38.992381 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.992388 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.992396 | orchestrator | 2025-06-02 17:57:38.992404 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-02 17:57:38.992411 | orchestrator | Monday 02 June 2025 17:52:51 +0000 (0:00:02.424) 0:04:19.358 *********** 2025-06-02 17:57:38.992419 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.992427 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.992435 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.992442 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.992450 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.992458 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.992465 | orchestrator | 2025-06-02 17:57:38.992473 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-02 17:57:38.992481 | orchestrator | Monday 02 June 2025 17:52:53 +0000 (0:00:02.179) 0:04:21.538 *********** 2025-06-02 17:57:38.992497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992663 | orchestrator | 2025-06-02 17:57:38.992670 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 17:57:38.992678 | orchestrator | Monday 02 June 2025 17:52:56 +0000 (0:00:03.359) 0:04:24.898 *********** 2025-06-02 17:57:38.992686 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:57:38.992694 | orchestrator | 2025-06-02 17:57:38.992702 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 17:57:38.992710 | orchestrator | Monday 02 June 2025 17:52:57 +0000 (0:00:01.007) 0:04:25.905 *********** 2025-06-02 17:57:38.992718 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992737 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992847 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.992919 | orchestrator | 2025-06-02 17:57:38.992926 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 17:57:38.992932 | orchestrator | Monday 02 June 2025 17:53:02 +0000 (0:00:04.885) 0:04:30.791 *********** 2025-06-02 17:57:38.992943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.992951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.992958 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.992970 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.993203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.993237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.993250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993258 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.993265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.993272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.993286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993291 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.993300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.993305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993309 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.993325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.993330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993334 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.993338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.993347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993351 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.993356 | orchestrator | 2025-06-02 17:57:38.993360 | 
orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 17:57:38.993364 | orchestrator | Monday 02 June 2025 17:53:06 +0000 (0:00:04.144) 0:04:34.936 *********** 2025-06-02 17:57:38.993372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.993378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.993385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993389 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.993394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.993401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.993409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993413 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.993418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.993425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.993429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993437 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.993442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.993447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993451 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.993459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.993463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993468 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.993475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.993479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.993487 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.993491 | orchestrator | 2025-06-02 17:57:38.993495 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 17:57:38.993499 | orchestrator | Monday 02 June 
2025 17:53:09 +0000 (0:00:02.649) 0:04:37.585 *********** 2025-06-02 17:57:38.993504 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.993508 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.993512 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.993516 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:57:38.993520 | orchestrator | 2025-06-02 17:57:38.993524 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-02 17:57:38.993528 | orchestrator | Monday 02 June 2025 17:53:10 +0000 (0:00:00.941) 0:04:38.527 *********** 2025-06-02 17:57:38.993533 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:57:38.993537 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 17:57:38.993541 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 17:57:38.993545 | orchestrator | 2025-06-02 17:57:38.993549 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-02 17:57:38.993553 | orchestrator | Monday 02 June 2025 17:53:13 +0000 (0:00:02.918) 0:04:41.446 *********** 2025-06-02 17:57:38.993557 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:57:38.993561 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 17:57:38.993566 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 17:57:38.993570 | orchestrator | 2025-06-02 17:57:38.993574 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-02 17:57:38.993578 | orchestrator | Monday 02 June 2025 17:53:15 +0000 (0:00:02.670) 0:04:44.117 *********** 2025-06-02 17:57:38.993582 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:57:38.993586 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:57:38.993590 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:57:38.993594 | orchestrator | 
2025-06-02 17:57:38.993599 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-02 17:57:38.993603 | orchestrator | Monday 02 June 2025 17:53:17 +0000 (0:00:01.310) 0:04:45.427 *********** 2025-06-02 17:57:38.993607 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:57:38.993611 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:57:38.993615 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:57:38.993619 | orchestrator | 2025-06-02 17:57:38.993623 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-02 17:57:38.993627 | orchestrator | Monday 02 June 2025 17:53:18 +0000 (0:00:00.971) 0:04:46.399 *********** 2025-06-02 17:57:38.993631 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 17:57:38.993638 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 17:57:38.993642 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 17:57:38.993647 | orchestrator | 2025-06-02 17:57:38.993651 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-02 17:57:38.993655 | orchestrator | Monday 02 June 2025 17:53:19 +0000 (0:00:01.587) 0:04:47.987 *********** 2025-06-02 17:57:38.993659 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 17:57:38.993663 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 17:57:38.993667 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 17:57:38.993671 | orchestrator | 2025-06-02 17:57:38.993675 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-02 17:57:38.993683 | orchestrator | Monday 02 June 2025 17:53:21 +0000 (0:00:01.633) 0:04:49.621 *********** 2025-06-02 17:57:38.993687 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 17:57:38.993691 | orchestrator | changed: 
[testbed-node-4] => (item=nova-compute) 2025-06-02 17:57:38.993695 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 17:57:38.993699 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-02 17:57:38.993703 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-02 17:57:38.993707 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-02 17:57:38.993711 | orchestrator | 2025-06-02 17:57:38.993716 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-02 17:57:38.993720 | orchestrator | Monday 02 June 2025 17:53:27 +0000 (0:00:05.970) 0:04:55.592 *********** 2025-06-02 17:57:38.993724 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.993728 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.993732 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.993736 | orchestrator | 2025-06-02 17:57:38.993745 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-02 17:57:38.993749 | orchestrator | Monday 02 June 2025 17:53:28 +0000 (0:00:00.617) 0:04:56.209 *********** 2025-06-02 17:57:38.993762 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.993767 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.993771 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.993775 | orchestrator | 2025-06-02 17:57:38.993786 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-02 17:57:38.993790 | orchestrator | Monday 02 June 2025 17:53:28 +0000 (0:00:00.760) 0:04:56.970 *********** 2025-06-02 17:57:38.993794 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.993798 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.993802 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.993806 | orchestrator | 2025-06-02 17:57:38.993810 | orchestrator | TASK 
[nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-02 17:57:38.993814 | orchestrator | Monday 02 June 2025 17:53:31 +0000 (0:00:02.513) 0:04:59.483 *********** 2025-06-02 17:57:38.993819 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 17:57:38.993824 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 17:57:38.993828 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 17:57:38.993833 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 17:57:38.993838 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 17:57:38.993843 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 17:57:38.993848 | orchestrator | 2025-06-02 17:57:38.993856 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-02 17:57:38.993862 | orchestrator | Monday 02 June 2025 17:53:36 +0000 (0:00:05.225) 0:05:04.709 *********** 2025-06-02 17:57:38.993869 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:57:38.993876 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:57:38.993882 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:57:38.993889 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:57:38.993896 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.993902 | orchestrator | 
changed: [testbed-node-4] => (item=None) 2025-06-02 17:57:38.993909 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.993920 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:57:38.993933 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.993940 | orchestrator | 2025-06-02 17:57:38.993947 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-02 17:57:38.993956 | orchestrator | Monday 02 June 2025 17:53:41 +0000 (0:00:05.283) 0:05:09.992 *********** 2025-06-02 17:57:38.993964 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.993973 | orchestrator | 2025-06-02 17:57:38.993980 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-02 17:57:38.993987 | orchestrator | Monday 02 June 2025 17:53:42 +0000 (0:00:00.239) 0:05:10.232 *********** 2025-06-02 17:57:38.993994 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.994001 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.994008 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.994039 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.994049 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.994055 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.994061 | orchestrator | 2025-06-02 17:57:38.994068 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-02 17:57:38.994080 | orchestrator | Monday 02 June 2025 17:53:43 +0000 (0:00:01.626) 0:05:11.859 *********** 2025-06-02 17:57:38.994087 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:57:38.994094 | orchestrator | 2025-06-02 17:57:38.994101 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-02 17:57:38.994108 | orchestrator | Monday 02 June 2025 17:53:44 +0000 (0:00:00.899) 0:05:12.758 *********** 2025-06-02 
17:57:38.994116 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.994123 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.994130 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.994137 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.994144 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.994151 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.994158 | orchestrator | 2025-06-02 17:57:38.994165 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-02 17:57:38.994172 | orchestrator | Monday 02 June 2025 17:53:45 +0000 (0:00:00.606) 0:05:13.364 *********** 2025-06-02 17:57:38.994184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994312 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994365 | orchestrator | 2025-06-02 17:57:38.994372 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-02 17:57:38.994379 | orchestrator | Monday 02 June 2025 
17:53:49 +0000 (0:00:03.876) 0:05:17.241 *********** 2025-06-02 17:57:38.994387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.994398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.994408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.994415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.994428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.994435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.994446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994483 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.994527 | orchestrator | 2025-06-02 17:57:38.994534 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-02 17:57:38.994546 | orchestrator | Monday 02 June 2025 17:53:57 +0000 (0:00:07.987) 0:05:25.228 *********** 2025-06-02 17:57:38.994553 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.994560 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.994567 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.994574 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.994580 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.994588 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.994594 | orchestrator | 2025-06-02 17:57:38.994601 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-02 
17:57:38.994608 | orchestrator | Monday 02 June 2025 17:53:58 +0000 (0:00:01.643) 0:05:26.871 *********** 2025-06-02 17:57:38.994615 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 17:57:38.994622 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 17:57:38.994629 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 17:57:38.994636 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 17:57:38.994643 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 17:57:38.994650 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 17:57:38.994657 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 17:57:38.994663 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.994670 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 17:57:38.994677 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.994684 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 17:57:38.994691 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.994699 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 17:57:38.994706 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 17:57:38.994713 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 17:57:38.994721 | orchestrator | 2025-06-02 17:57:38.994728 | orchestrator | TASK [nova-cell : Copying over 
libvirt TLS keys] ******************************* 2025-06-02 17:57:38.994735 | orchestrator | Monday 02 June 2025 17:54:02 +0000 (0:00:03.850) 0:05:30.721 *********** 2025-06-02 17:57:38.994742 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.994749 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.994756 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.994763 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.994770 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.994776 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.994782 | orchestrator | 2025-06-02 17:57:38.994788 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-02 17:57:38.994796 | orchestrator | Monday 02 June 2025 17:54:03 +0000 (0:00:00.852) 0:05:31.573 *********** 2025-06-02 17:57:38.994803 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 17:57:38.994809 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 17:57:38.994820 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 17:57:38.994827 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 17:57:38.994835 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 17:57:38.994851 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 17:57:38.994859 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 17:57:38.994865 | orchestrator | 
skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 17:57:38.994874 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 17:57:38.994881 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 17:57:38.994888 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.994894 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 17:57:38.994900 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.994907 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 17:57:38.994914 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.994927 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:57:38.994934 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:57:38.994941 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:57:38.994947 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:57:38.994954 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:57:38.994961 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:57:38.994966 | orchestrator | 2025-06-02 17:57:38.994970 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] 
********************************** 2025-06-02 17:57:38.994974 | orchestrator | Monday 02 June 2025 17:54:09 +0000 (0:00:05.834) 0:05:37.407 *********** 2025-06-02 17:57:38.994978 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:57:38.994982 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:57:38.994986 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:57:38.994990 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:57:38.994994 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 17:57:38.994998 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 17:57:38.995002 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 17:57:38.995006 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:57:38.995010 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:57:38.995014 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:57:38.995018 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:57:38.995022 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:57:38.995026 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:57:38.995031 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:57:38.995040 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 17:57:38.995044 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.995048 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 17:57:38.995052 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.995056 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:57:38.995060 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 17:57:38.995064 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.995068 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:57:38.995073 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:57:38.995080 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:57:38.995084 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:57:38.995088 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:57:38.995092 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:57:38.995096 | orchestrator | 2025-06-02 17:57:38.995101 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-02 17:57:38.995105 | orchestrator | Monday 02 June 2025 17:54:17 +0000 (0:00:08.489) 0:05:45.897 *********** 2025-06-02 17:57:38.995109 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.995113 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.995117 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.995121 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.995125 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 17:57:38.995129 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.995133 | orchestrator | 2025-06-02 17:57:38.995137 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-02 17:57:38.995141 | orchestrator | Monday 02 June 2025 17:54:18 +0000 (0:00:00.506) 0:05:46.404 *********** 2025-06-02 17:57:38.995146 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.995150 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.995154 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.995158 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.995162 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.995168 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.995175 | orchestrator | 2025-06-02 17:57:38.995181 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-02 17:57:38.995188 | orchestrator | Monday 02 June 2025 17:54:19 +0000 (0:00:00.828) 0:05:47.232 *********** 2025-06-02 17:57:38.995198 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.995205 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.995253 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.995261 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.995266 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.995270 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.995274 | orchestrator | 2025-06-02 17:57:38.995278 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-02 17:57:38.995282 | orchestrator | Monday 02 June 2025 17:54:21 +0000 (0:00:02.507) 0:05:49.740 *********** 2025-06-02 17:57:38.995287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.995297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.995302 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.995306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.995316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.995321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.995325 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.995333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.995342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.995346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.995350 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.995358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:57:38.995363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:57:38.995370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.995377 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.995381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.995385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.995389 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.995392 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:57:38.995443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:57:38.995452 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.995458 | orchestrator | 2025-06-02 17:57:38.995464 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-02 17:57:38.995470 | orchestrator | Monday 02 June 2025 17:54:24 +0000 (0:00:02.751) 0:05:52.492 *********** 2025-06-02 17:57:38.995476 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 17:57:38.995482 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 17:57:38.995489 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.995494 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 17:57:38.995499 | 
orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 17:57:38.995505 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 17:57:38.995511 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 17:57:38.995517 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.995524 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 17:57:38.995530 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.995536 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 17:57:38.995542 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 17:57:38.995548 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 17:57:38.995560 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.995566 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.995569 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 17:57:38.995573 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 17:57:38.995580 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.995584 | orchestrator | 2025-06-02 17:57:38.995588 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-02 17:57:38.995591 | orchestrator | Monday 02 June 2025 17:54:25 +0000 (0:00:00.750) 0:05:53.242 *********** 2025-06-02 17:57:38.995595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995637 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-06-02 17:57:38.995663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995671 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:57:38.995675 | orchestrator | 2025-06-02 17:57:38.995678 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 17:57:38.995683 | orchestrator | Monday 02 June 2025 17:54:28 +0000 (0:00:03.213) 0:05:56.456 *********** 2025-06-02 17:57:38.995687 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.995690 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.995694 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.995701 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.995704 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.995708 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.995715 | orchestrator | 2025-06-02 17:57:38.995719 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 17:57:38.995722 | orchestrator | Monday 02 June 2025 17:54:28 +0000 (0:00:00.527) 0:05:56.984 *********** 2025-06-02 17:57:38.995726 | orchestrator | 2025-06-02 17:57:38.995730 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 17:57:38.995734 | orchestrator | Monday 02 June 2025 17:54:29 +0000 (0:00:00.248) 0:05:57.233 *********** 2025-06-02 17:57:38.995737 | orchestrator | 2025-06-02 17:57:38.995741 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-06-02 17:57:38.995745 | orchestrator | Monday 02 June 2025 17:54:29 +0000 (0:00:00.167) 0:05:57.400 *********** 2025-06-02 17:57:38.995748 | orchestrator | 2025-06-02 17:57:38.995752 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 17:57:38.995756 | orchestrator | Monday 02 June 2025 17:54:29 +0000 (0:00:00.175) 0:05:57.576 *********** 2025-06-02 17:57:38.995760 | orchestrator | 2025-06-02 17:57:38.995763 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 17:57:38.995767 | orchestrator | Monday 02 June 2025 17:54:29 +0000 (0:00:00.124) 0:05:57.701 *********** 2025-06-02 17:57:38.995771 | orchestrator | 2025-06-02 17:57:38.995774 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 17:57:38.995778 | orchestrator | Monday 02 June 2025 17:54:29 +0000 (0:00:00.114) 0:05:57.815 *********** 2025-06-02 17:57:38.995782 | orchestrator | 2025-06-02 17:57:38.995785 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-02 17:57:38.995789 | orchestrator | Monday 02 June 2025 17:54:29 +0000 (0:00:00.118) 0:05:57.934 *********** 2025-06-02 17:57:38.995793 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.995803 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:38.995810 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:57:38.995816 | orchestrator | 2025-06-02 17:57:38.995822 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-02 17:57:38.995828 | orchestrator | Monday 02 June 2025 17:54:36 +0000 (0:00:06.677) 0:06:04.611 *********** 2025-06-02 17:57:38.995834 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.995840 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:38.995848 | orchestrator | changed: 
[testbed-node-2] 2025-06-02 17:57:38.995854 | orchestrator | 2025-06-02 17:57:38.995860 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-02 17:57:38.995867 | orchestrator | Monday 02 June 2025 17:54:56 +0000 (0:00:19.881) 0:06:24.493 *********** 2025-06-02 17:57:38.995872 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.995879 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.995885 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.995893 | orchestrator | 2025-06-02 17:57:38.995901 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-02 17:57:38.995909 | orchestrator | Monday 02 June 2025 17:55:21 +0000 (0:00:25.345) 0:06:49.838 *********** 2025-06-02 17:57:38.995915 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.995921 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.995927 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.995933 | orchestrator | 2025-06-02 17:57:38.995939 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-02 17:57:38.995945 | orchestrator | Monday 02 June 2025 17:55:54 +0000 (0:00:32.377) 0:07:22.216 *********** 2025-06-02 17:57:38.995951 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-06-02 17:57:38.995958 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-06-02 17:57:38.995965 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2025-06-02 17:57:38.995971 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.995978 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.995983 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.995994 | orchestrator | 2025-06-02 17:57:38.996000 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-02 17:57:38.996006 | orchestrator | Monday 02 June 2025 17:56:00 +0000 (0:00:06.550) 0:07:28.767 *********** 2025-06-02 17:57:38.996013 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.996019 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.996025 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.996031 | orchestrator | 2025-06-02 17:57:38.996037 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-02 17:57:38.996044 | orchestrator | Monday 02 June 2025 17:56:01 +0000 (0:00:00.836) 0:07:29.603 *********** 2025-06-02 17:57:38.996050 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:38.996056 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:38.996062 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:38.996068 | orchestrator | 2025-06-02 17:57:38.996074 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-02 17:57:38.996081 | orchestrator | Monday 02 June 2025 17:56:29 +0000 (0:00:27.802) 0:07:57.405 *********** 2025-06-02 17:57:38.996087 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.996093 | orchestrator | 2025-06-02 17:57:38.996100 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-02 17:57:38.996106 | orchestrator | Monday 02 June 2025 17:56:29 +0000 (0:00:00.142) 0:07:57.547 *********** 2025-06-02 17:57:38.996112 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.996119 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 17:57:38.996125 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.996131 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.996137 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.996144 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-02 17:57:38.996151 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:57:38.996157 | orchestrator | 2025-06-02 17:57:38.996167 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-02 17:57:38.996173 | orchestrator | Monday 02 June 2025 17:56:51 +0000 (0:00:22.207) 0:08:19.755 *********** 2025-06-02 17:57:38.996179 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.996185 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.996191 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.996197 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.996204 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.996227 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.996234 | orchestrator | 2025-06-02 17:57:38.996240 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-02 17:57:38.996246 | orchestrator | Monday 02 June 2025 17:57:01 +0000 (0:00:10.231) 0:08:29.986 *********** 2025-06-02 17:57:38.996253 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.996259 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.996265 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.996271 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.996278 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.996284 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-06-02 17:57:38.996290 | 
orchestrator | 2025-06-02 17:57:38.996297 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 17:57:38.996303 | orchestrator | Monday 02 June 2025 17:57:05 +0000 (0:00:03.967) 0:08:33.954 *********** 2025-06-02 17:57:38.996309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:57:38.996316 | orchestrator | 2025-06-02 17:57:38.996322 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 17:57:38.996328 | orchestrator | Monday 02 June 2025 17:57:17 +0000 (0:00:11.945) 0:08:45.900 *********** 2025-06-02 17:57:38.996335 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:57:38.996346 | orchestrator | 2025-06-02 17:57:38.996352 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-02 17:57:38.996359 | orchestrator | Monday 02 June 2025 17:57:19 +0000 (0:00:01.393) 0:08:47.293 *********** 2025-06-02 17:57:38.996366 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.996372 | orchestrator | 2025-06-02 17:57:38.996379 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-02 17:57:38.996385 | orchestrator | Monday 02 June 2025 17:57:20 +0000 (0:00:01.326) 0:08:48.620 *********** 2025-06-02 17:57:38.996392 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:57:38.996398 | orchestrator | 2025-06-02 17:57:38.996404 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-02 17:57:38.996411 | orchestrator | Monday 02 June 2025 17:57:31 +0000 (0:00:10.874) 0:08:59.495 *********** 2025-06-02 17:57:38.996417 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:57:38.996424 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:57:38.996430 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:57:38.996436 | 
orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:38.996442 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:57:38.996449 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:38.996455 | orchestrator | 2025-06-02 17:57:38.996462 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-02 17:57:38.996468 | orchestrator | 2025-06-02 17:57:38.996474 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-02 17:57:38.996480 | orchestrator | Monday 02 June 2025 17:57:33 +0000 (0:00:01.796) 0:09:01.291 *********** 2025-06-02 17:57:38.996487 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:38.996493 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:38.996500 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:57:38.996507 | orchestrator | 2025-06-02 17:57:38.996513 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-02 17:57:38.996520 | orchestrator | 2025-06-02 17:57:38.996527 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-02 17:57:38.996534 | orchestrator | Monday 02 June 2025 17:57:34 +0000 (0:00:01.106) 0:09:02.398 *********** 2025-06-02 17:57:38.996540 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.996546 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.996551 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.996558 | orchestrator | 2025-06-02 17:57:38.996564 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-02 17:57:38.996571 | orchestrator | 2025-06-02 17:57:38.996578 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-02 17:57:38.996585 | orchestrator | Monday 02 June 2025 17:57:34 +0000 (0:00:00.520) 0:09:02.919 *********** 2025-06-02 17:57:38.996593 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-02 17:57:38.996599 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 17:57:38.996606 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 17:57:38.996612 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-02 17:57:38.996619 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-02 17:57:38.996625 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-02 17:57:38.996631 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:38.996637 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-02 17:57:38.996643 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 17:57:38.996649 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 17:57:38.996655 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-02 17:57:38.996662 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-02 17:57:38.996701 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-02 17:57:38.996710 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:38.996714 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-02 17:57:38.996718 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 17:57:38.996722 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 17:57:38.996730 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-02 17:57:38.996734 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-02 17:57:38.996737 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-02 17:57:38.996741 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:38.996745 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-02 17:57:38.996749 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 17:57:38.996752 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 17:57:38.996756 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-02 17:57:38.996760 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-02 17:57:38.996763 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-02 17:57:38.996767 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.996771 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-02 17:57:38.996774 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 17:57:38.996778 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 17:57:38.996782 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-02 17:57:38.996786 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-02 17:57:38.996789 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-02 17:57:38.996793 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.996797 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-02 17:57:38.996801 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 17:57:38.996804 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 17:57:38.996810 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-02 17:57:38.996814 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-02 17:57:38.996818 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-02 17:57:38.996822 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.996826 | orchestrator | 
2025-06-02 17:57:38.996829 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-02 17:57:38.996833 | orchestrator | 2025-06-02 17:57:38.996837 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-02 17:57:38.996841 | orchestrator | Monday 02 June 2025 17:57:36 +0000 (0:00:01.299) 0:09:04.218 *********** 2025-06-02 17:57:38.996844 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-02 17:57:38.996848 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-02 17:57:38.996852 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.996856 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-02 17:57:38.996860 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-02 17:57:38.996864 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.996867 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-02 17:57:38.996871 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-02 17:57:38.996875 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.996878 | orchestrator | 2025-06-02 17:57:38.996882 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-02 17:57:38.996886 | orchestrator | 2025-06-02 17:57:38.996890 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-02 17:57:38.996893 | orchestrator | Monday 02 June 2025 17:57:36 +0000 (0:00:00.827) 0:09:05.046 *********** 2025-06-02 17:57:38.996901 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.996905 | orchestrator | 2025-06-02 17:57:38.996909 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-02 17:57:38.996913 | orchestrator | 2025-06-02 17:57:38.996916 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ******************** 2025-06-02 17:57:38.996920 | orchestrator | Monday 02 June 2025 17:57:37 +0000 (0:00:00.654) 0:09:05.700 *********** 2025-06-02 17:57:38.996924 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:38.996927 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:38.996931 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:38.996935 | orchestrator | 2025-06-02 17:57:38.996938 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:57:38.996942 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:57:38.996947 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-02 17:57:38.996952 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 17:57:38.996956 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 17:57:38.996959 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 17:57:38.996963 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-02 17:57:38.997029 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-02 17:57:38.997033 | orchestrator | 2025-06-02 17:57:38.997037 | orchestrator | 2025-06-02 17:57:38.997041 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:57:38.997045 | orchestrator | Monday 02 June 2025 17:57:37 +0000 (0:00:00.423) 0:09:06.124 *********** 2025-06-02 17:57:38.997049 | orchestrator | =============================================================================== 2025-06-02 17:57:38.997052 | orchestrator | 
nova-cell : Restart nova-libvirt container ----------------------------- 32.38s 2025-06-02 17:57:38.997056 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.44s 2025-06-02 17:57:38.997060 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.80s 2025-06-02 17:57:38.997064 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.35s 2025-06-02 17:57:38.997067 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.38s 2025-06-02 17:57:38.997071 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.21s 2025-06-02 17:57:38.997075 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.51s 2025-06-02 17:57:38.997078 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.88s 2025-06-02 17:57:38.997082 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 14.47s 2025-06-02 17:57:38.997086 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.39s 2025-06-02 17:57:38.997090 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.26s 2025-06-02 17:57:38.997093 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.09s 2025-06-02 17:57:38.997097 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.95s 2025-06-02 17:57:38.997103 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.87s 2025-06-02 17:57:38.997111 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.54s 2025-06-02 17:57:38.997115 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.28s 2025-06-02 17:57:38.997121 | orchestrator | nova-cell : 
Fail if nova-compute service failed to register ------------ 10.23s 2025-06-02 17:57:38.997128 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.77s 2025-06-02 17:57:38.997134 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.58s 2025-06-02 17:57:38.997139 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.49s 2025-06-02 17:57:38.997146 | orchestrator | 2025-06-02 17:57:38 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:38.997152 | orchestrator | 2025-06-02 17:57:38 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:38.997159 | orchestrator | 2025-06-02 17:57:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:42.046997 | orchestrator | 2025-06-02 17:57:42 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:42.047870 | orchestrator | 2025-06-02 17:57:42 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:42.048187 | orchestrator | 2025-06-02 17:57:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:45.092416 | orchestrator | 2025-06-02 17:57:45 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:45.094339 | orchestrator | 2025-06-02 17:57:45 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:45.094863 | orchestrator | 2025-06-02 17:57:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:48.129596 | orchestrator | 2025-06-02 17:57:48 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:48.132561 | orchestrator | 2025-06-02 17:57:48 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:48.132806 | orchestrator | 2025-06-02 17:57:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
17:57:51.184617 | orchestrator | 2025-06-02 17:57:51 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:51.186877 | orchestrator | 2025-06-02 17:57:51 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:51.186981 | orchestrator | 2025-06-02 17:57:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:54.231620 | orchestrator | 2025-06-02 17:57:54 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:54.233932 | orchestrator | 2025-06-02 17:57:54 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:54.234083 | orchestrator | 2025-06-02 17:57:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:57.279733 | orchestrator | 2025-06-02 17:57:57 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:57:57.282595 | orchestrator | 2025-06-02 17:57:57 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state STARTED 2025-06-02 17:57:57.282654 | orchestrator | 2025-06-02 17:57:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:58:00.331682 | orchestrator | 2025-06-02 17:58:00 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:58:00.339716 | orchestrator | 2025-06-02 17:58:00.339786 | orchestrator | 2025-06-02 17:58:00.339793 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:58:00.339799 | orchestrator | 2025-06-02 17:58:00.339804 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:58:00.339809 | orchestrator | Monday 02 June 2025 17:55:35 +0000 (0:00:00.271) 0:00:00.271 *********** 2025-06-02 17:58:00.339833 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:58:00.339839 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:58:00.339844 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:58:00.339849 | 
orchestrator | 2025-06-02 17:58:00.339853 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:58:00.339858 | orchestrator | Monday 02 June 2025 17:55:35 +0000 (0:00:00.286) 0:00:00.557 *********** 2025-06-02 17:58:00.339863 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-02 17:58:00.339868 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-02 17:58:00.339884 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-02 17:58:00.339888 | orchestrator | 2025-06-02 17:58:00.339899 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-02 17:58:00.339904 | orchestrator | 2025-06-02 17:58:00.339909 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-02 17:58:00.339914 | orchestrator | Monday 02 June 2025 17:55:35 +0000 (0:00:00.412) 0:00:00.970 *********** 2025-06-02 17:58:00.339919 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:58:00.339924 | orchestrator | 2025-06-02 17:58:00.339939 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-02 17:58:00.339944 | orchestrator | Monday 02 June 2025 17:55:36 +0000 (0:00:00.547) 0:00:01.518 *********** 2025-06-02 17:58:00.339952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:58:00.339959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:58:00.339965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:58:00.339969 | orchestrator | 2025-06-02 17:58:00.339974 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-02 17:58:00.339979 | orchestrator | Monday 02 June 2025 17:55:37 +0000 (0:00:00.741) 0:00:02.259 *********** 2025-06-02 17:58:00.339984 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this 
access 2025-06-02 17:58:00.340010 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-02 17:58:00.340021 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:58:00.340026 | orchestrator | 2025-06-02 17:58:00.340031 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-02 17:58:00.340035 | orchestrator | Monday 02 June 2025 17:55:38 +0000 (0:00:00.944) 0:00:03.204 *********** 2025-06-02 17:58:00.340040 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:58:00.340059 | orchestrator | 2025-06-02 17:58:00.340065 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-02 17:58:00.340070 | orchestrator | Monday 02 June 2025 17:55:38 +0000 (0:00:00.710) 0:00:03.915 *********** 2025-06-02 17:58:00.340086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:58:00.340095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:58:00.340100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:58:00.340105 | orchestrator | 2025-06-02 17:58:00.340110 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-02 17:58:00.340114 | orchestrator | Monday 02 June 2025 17:55:40 +0000 (0:00:01.424) 0:00:05.339 *********** 2025-06-02 17:58:00.340119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:58:00.340142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:58:00.340152 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:00.340157 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:58:00.340301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:58:00.340317 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:58:00.340324 | orchestrator | 2025-06-02 17:58:00.340332 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-02 17:58:00.340339 | orchestrator | Monday 02 June 2025 17:55:40 
+0000 (0:00:00.392) 0:00:05.732 *********** 2025-06-02 17:58:00.340352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:58:00.340361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:58:00.340369 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:00.340376 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:58:00.340383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 17:58:00.340391 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:58:00.340398 | orchestrator | 2025-06-02 17:58:00.340405 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-02 17:58:00.340421 | orchestrator | Monday 02 June 2025 17:55:41 +0000 (0:00:00.795) 0:00:06.527 *********** 2025-06-02 17:58:00.340429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:58:00.340444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.340864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.340876 | orchestrator |
2025-06-02 17:58:00.340881 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-02 17:58:00.340886 | orchestrator | Monday 02 June 2025 17:55:42 +0000 (0:00:01.319) 0:00:07.847 ***********
2025-06-02 17:58:00.340896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.340901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.340906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.340917 | orchestrator |
2025-06-02 17:58:00.340922 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-02 17:58:00.340926 | orchestrator | Monday 02 June 2025 17:55:44 +0000 (0:00:01.350) 0:00:09.197 ***********
2025-06-02 17:58:00.340931 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:58:00.340936 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:58:00.340940 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:58:00.340945 | orchestrator |
2025-06-02 17:58:00.340949 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-02 17:58:00.340954 | orchestrator | Monday 02 June 2025 17:55:44 +0000 (0:00:00.528) 0:00:09.725 ***********
2025-06-02 17:58:00.340958 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 17:58:00.340964 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 17:58:00.340968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 17:58:00.340973 | orchestrator |
2025-06-02 17:58:00.340977 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-02 17:58:00.340982 | orchestrator | Monday 02 June 2025 17:55:46 +0000 (0:00:01.260) 0:00:10.986 ***********
2025-06-02 17:58:00.340987 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 17:58:00.341010 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 17:58:00.341016 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 17:58:00.341021 | orchestrator |
2025-06-02 17:58:00.341025 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-02 17:58:00.341030 | orchestrator | Monday 02 June 2025 17:55:47 +0000 (0:00:01.305) 0:00:12.291 ***********
2025-06-02 17:58:00.341035 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 17:58:00.341039 | orchestrator |
2025-06-02 17:58:00.341044 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-02 17:58:00.341048 | orchestrator | Monday 02 June 2025 17:55:48 +0000 (0:00:00.791) 0:00:13.082 ***********
2025-06-02 17:58:00.341053 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-02 17:58:00.341058 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-02 17:58:00.341062 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:58:00.341067 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:58:00.341072 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:58:00.341076 | orchestrator |
2025-06-02 17:58:00.341081 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-02 17:58:00.341085 | orchestrator | Monday 02 June 2025 17:55:48 +0000 (0:00:00.693) 0:00:13.776 ***********
2025-06-02 17:58:00.341090 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:58:00.341094 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:58:00.341099 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:58:00.341104 | orchestrator |
2025-06-02 17:58:00.341112 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-02 17:58:00.341117 | orchestrator | Monday 02 June 2025 17:55:49 +0000 (0:00:00.535) 0:00:14.311 ***********
2025-06-02 17:58:00.341126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1076005, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6370518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.341132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1076005, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6370518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1076005, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6370518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1075995, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6310518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341191 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1075995, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6310518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1075995, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6310518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1075989, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6280518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341286 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1075989, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6280518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1075989, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6280518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1076002, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6330519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341327 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1076002, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6330519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1076002, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6330519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1075978, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6240516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 
17:58:00.341354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1075978, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6240516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1075978, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6240516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1075991, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6290517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-02 17:58:00.341368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1075991, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6290517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1075991, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6290517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1075998, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6320517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1075998, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6320517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1075998, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6320517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1075974, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6220517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1075974, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6220517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1075974, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6220517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1075928, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.591051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1075928, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.591051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1075928, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.591051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1075980, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6240516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1075980, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6240516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1075980, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6240516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1075958, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6170516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1075958, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6170516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1075958, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6170516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1075997, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6310518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1075997, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6310518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1075997, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6310518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1075983, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6260517, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1075983, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6260517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1075983, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6260517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1076003, 'dev': 134, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6340518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1076003, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6340518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1076003, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6340518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1075970, 'dev': 134, 'nlink': 
1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6200516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1075970, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6200516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1075970, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6200516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1075992, 'dev': 
134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6300519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1075992, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6300519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1075992, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6300519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 117836, 'inode': 1075933, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.5960512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1075933, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.5960512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1075933, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.5960512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1075961, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6190517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1075961, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6190517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1075961, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6190517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1075986, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6270516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1075986, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6270516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1075986, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6270516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1076096, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7030528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1076096, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7030528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1076096, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7030528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341851 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1076090, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6850526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1076090, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6850526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1076090, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6850526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-02 17:58:00.341896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1076012, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.639052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1076012, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.639052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1076012, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.639052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1076162, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7350533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1076162, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7350533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1076162, 'dev': 134, 
'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7350533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1076013, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6400518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1076013, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6400518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 31128, 'inode': 1076013, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6400518, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.341985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1076139, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7120528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1076139, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7120528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00 | INFO  | Task 1c77fa8b-2b3e-4daf-89f2-22e8c3dc6560 is in state SUCCESS 2025-06-02 17:58:00.342063 | orchestrator | changed: [testbed-node-1] => (item={'key':
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1076139, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7120528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1076229, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7380533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1076229, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7380533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-02 17:58:00.342092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1076229, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7380533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1076130, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7050529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1076130, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7050529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1076130, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7050529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1076137, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7060528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1076137, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1748884039.7060528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1076137, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7060528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1076015, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.642052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1076015, 'dev': 134, 
'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.642052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1076015, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.642052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1076093, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6860526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:58:00.342203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 24243, 'inode': 1076093, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6860526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1076093, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6860526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1076234, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7390532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1076234, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7390532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1076234, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7390532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1076159, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7120528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1076159, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7120528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1076056, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6760523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1076159, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7120528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1076056, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6760523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1076020, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.656052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1076056, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6760523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1076020, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.656052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1076085, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6780524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1076020, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.656052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1076085, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6780524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1076086, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6840525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1076085, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6780524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1076086, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6840525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1076094, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6870525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1076086, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6840525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1076094, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6870525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1076135, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7050529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1076094, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6870525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1076135, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7050529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1076095, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6880527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1076135, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7050529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1076095, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6880527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1076095, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.6880527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False,
'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1076238, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7400534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1076238, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7400534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1076238, 'dev': 134, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1748884039.7400534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:58:00.342553 | orchestrator |
2025-06-02 17:58:00.342561 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-06-02 17:58:00.342574 | orchestrator | Monday 02 June 2025 17:56:28 +0000 (0:00:38.857) 0:00:53.168 ***********
2025-06-02 17:58:00.342582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.342591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.342600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:58:00.342608 | orchestrator |
2025-06-02 17:58:00.342616 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-06-02 17:58:00.342625 | orchestrator | Monday 02 June 2025 17:56:29 +0000 (0:00:01.234) 0:00:54.402 ***********
2025-06-02 17:58:00.342637 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:58:00.342647 | orchestrator |
2025-06-02 17:58:00.342655 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-02 17:58:00.342663 | orchestrator | Monday 02 June 2025 17:56:32 +0000 (0:00:02.798) 0:00:57.201 ***********
2025-06-02 17:58:00.342670 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:58:00.342678 | orchestrator |
2025-06-02 17:58:00.342686 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 17:58:00.342693 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:02.387) 0:00:59.589 ***********
2025-06-02 17:58:00.342701 | orchestrator |
2025-06-02 17:58:00.342709 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 17:58:00.342717 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:00.258) 0:00:59.847 ***********
2025-06-02 17:58:00.342726 | orchestrator |
2025-06-02 17:58:00.342734 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 17:58:00.342742 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:00.073) 0:00:59.921 ***********
2025-06-02 17:58:00.342750 | orchestrator |
2025-06-02 17:58:00.342758 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-02 17:58:00.342766 | orchestrator | Monday 02 June 2025 17:56:35 +0000 (0:00:00.091) 0:01:00.012 ***********
2025-06-02 17:58:00.342774 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:58:00.342782 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:58:00.342791 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:58:00.342805 | orchestrator |
2025-06-02 17:58:00.342814 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-02 17:58:00.342826 | orchestrator | Monday 02 June 2025 17:56:42 +0000 (0:00:07.151) 0:01:07.164 ***********
2025-06-02 17:58:00.342835 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:58:00.342843 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:58:00.342851 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-06-02 17:58:00.342860 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-06-02 17:58:00.342868 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-06-02 17:58:00.342875 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:58:00.342881 | orchestrator |
2025-06-02 17:58:00.342888 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-02 17:58:00.342895 | orchestrator | Monday 02 June 2025 17:57:20 +0000 (0:00:38.723) 0:01:45.887 ***********
2025-06-02 17:58:00.342901 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:58:00.342908 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:58:00.342916 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:58:00.342923 | orchestrator |
2025-06-02 17:58:00.342930 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-02 17:58:00.342938 | orchestrator | Monday 02 June 2025 17:57:52 +0000 (0:00:31.235) 0:02:17.123 ***********
2025-06-02 17:58:00.342945 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:58:00.342952 | orchestrator |
2025-06-02 17:58:00.342960 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-02 17:58:00.342968 | orchestrator | Monday 02 June 2025 17:57:54 +0000 (0:00:02.546) 0:02:19.669 ***********
2025-06-02 17:58:00.342975 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:58:00.342983 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:58:00.342991 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:58:00.342998 | orchestrator |
2025-06-02 17:58:00.343006 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-02 17:58:00.343013 | orchestrator | Monday 02 June 2025 17:57:54 +0000 (0:00:00.301) 0:02:19.971 ***********
2025-06-02 17:58:00.343023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-06-02 17:58:00.343034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-06-02 17:58:00.343042 | orchestrator |
2025-06-02 17:58:00.343050 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-02 17:58:00.343058 | orchestrator | Monday 02 June 2025 17:57:57 +0000 (0:00:02.457) 0:02:22.429 ***********
2025-06-02 17:58:00.343065 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:58:00.343073 | orchestrator |
2025-06-02 17:58:00.343080 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:58:00.343089 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 17:58:00.343100 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 17:58:00.343107 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 17:58:00.343121 | orchestrator |
2025-06-02 17:58:00.343128 | orchestrator |
2025-06-02 17:58:00.343136 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:58:00.343143 | orchestrator | Monday 02 June 2025 17:57:57 +0000 (0:00:00.272) 0:02:22.701 ***********
2025-06-02 17:58:00.343157 | orchestrator | ===============================================================================
2025-06-02 17:58:00.343165 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.86s
2025-06-02 17:58:00.343173 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.72s
2025-06-02 17:58:00.343180 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.24s
2025-06-02 17:58:00.343187 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.15s
2025-06-02 17:58:00.343194 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.80s
2025-06-02 17:58:00.343201 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.55s
2025-06-02 17:58:00.343259 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.46s
2025-06-02 17:58:00.343268 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.39s
2025-06-02 17:58:00.343276 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.42s
2025-06-02 17:58:00.343284 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.35s
2025-06-02 17:58:00.343291 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s
2025-06-02 17:58:00.343299 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.31s
2025-06-02 17:58:00.343311 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.26s
2025-06-02 17:58:00.343319 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.23s
2025-06-02 17:58:00.343327 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.94s
2025-06-02 17:58:00.343334 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.80s
2025-06-02 17:58:00.343342 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s
2025-06-02 17:58:00.343349 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.74s
2025-06-02 17:58:00.343357 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s
2025-06-02 17:58:00.343365 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s
2025-06-02 17:58:00.343372 | orchestrator | 2025-06-02 17:58:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:03.388739 | orchestrator | 2025-06-02 17:58:03 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:03.388835 | orchestrator | 2025-06-02 17:58:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:06.442570 | orchestrator | 2025-06-02 17:58:06 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:06.442674 | orchestrator | 2025-06-02 17:58:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:09.494351 | orchestrator | 2025-06-02 17:58:09 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:09.494489 | orchestrator | 2025-06-02 17:58:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:12.546898 | orchestrator | 2025-06-02 17:58:12 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:12.547006 | orchestrator | 2025-06-02 17:58:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:15.596781 | orchestrator | 2025-06-02 17:58:15 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:15.596916 | orchestrator | 2025-06-02 17:58:15 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:18.636758 | orchestrator | 2025-06-02 17:58:18 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:18.636864 | orchestrator | 2025-06-02 17:58:18 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:21.678202 | orchestrator | 2025-06-02 17:58:21 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:21.678313 | orchestrator | 2025-06-02 17:58:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:24.727331 | orchestrator | 2025-06-02 17:58:24 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:24.727439 | orchestrator | 2025-06-02 17:58:24 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:27.761878 | orchestrator | 2025-06-02 17:58:27 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:27.761971 | orchestrator | 2025-06-02 17:58:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:30.806197 | orchestrator | 2025-06-02 17:58:30 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:30.806354 | orchestrator | 2025-06-02 17:58:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:33.853328 | orchestrator | 2025-06-02 17:58:33 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:33.853440 | orchestrator | 2025-06-02 17:58:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:36.895954 | orchestrator | 2025-06-02 17:58:36 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:36.896512 | orchestrator | 2025-06-02 17:58:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:39.947692 | orchestrator | 2025-06-02 17:58:39 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:39.947764 | orchestrator | 2025-06-02 17:58:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:43.015037 | orchestrator | 2025-06-02 17:58:43 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:43.015116 | orchestrator | 2025-06-02 17:58:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:46.043897 | orchestrator | 2025-06-02 17:58:46 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:46.044971 | orchestrator | 2025-06-02 17:58:46 | INFO  | Task 5c4442ae-07fb-4f07-95e1-d717fb9fa089 is in state STARTED
2025-06-02 17:58:46.045005 | orchestrator | 2025-06-02 17:58:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:49.095923 | orchestrator | 2025-06-02 17:58:49 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:49.096062 | orchestrator | 2025-06-02 17:58:49 | INFO  | Task 5c4442ae-07fb-4f07-95e1-d717fb9fa089 is in state STARTED
2025-06-02 17:58:49.096310 | orchestrator | 2025-06-02 17:58:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:52.137794 | orchestrator | 2025-06-02 17:58:52 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:52.138119 | orchestrator | 2025-06-02 17:58:52 | INFO  | Task 5c4442ae-07fb-4f07-95e1-d717fb9fa089 is in state STARTED
2025-06-02 17:58:52.138141 | orchestrator | 2025-06-02 17:58:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:55.183294 | orchestrator | 2025-06-02 17:58:55 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:55.183367 | orchestrator | 2025-06-02 17:58:55 | INFO  | Task 5c4442ae-07fb-4f07-95e1-d717fb9fa089 is in state STARTED
2025-06-02 17:58:55.183373 | orchestrator | 2025-06-02 17:58:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:58:58.230600 | orchestrator | 2025-06-02 17:58:58 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:58:58.232442 | orchestrator | 2025-06-02 17:58:58 | INFO  | Task 5c4442ae-07fb-4f07-95e1-d717fb9fa089 is in state STARTED
2025-06-02 17:58:58.233110 | orchestrator | 2025-06-02 17:58:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:01.282878 | orchestrator | 2025-06-02 17:59:01 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:01.283264 | orchestrator | 2025-06-02 17:59:01 | INFO
| Task 5c4442ae-07fb-4f07-95e1-d717fb9fa089 is in state SUCCESS 2025-06-02 17:59:01.283297 | orchestrator | 2025-06-02 17:59:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:04.327310 | orchestrator | 2025-06-02 17:59:04 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:59:04.327410 | orchestrator | 2025-06-02 17:59:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:07.375906 | orchestrator | 2025-06-02 17:59:07 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:59:07.376041 | orchestrator | 2025-06-02 17:59:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:10.419645 | orchestrator | 2025-06-02 17:59:10 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:59:10.419779 | orchestrator | 2025-06-02 17:59:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:13.455409 | orchestrator | 2025-06-02 17:59:13 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:59:13.455535 | orchestrator | 2025-06-02 17:59:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:16.500980 | orchestrator | 2025-06-02 17:59:16 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:59:16.501055 | orchestrator | 2025-06-02 17:59:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:19.549329 | orchestrator | 2025-06-02 17:59:19 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:59:19.549439 | orchestrator | 2025-06-02 17:59:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:22.595944 | orchestrator | 2025-06-02 17:59:22 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED 2025-06-02 17:59:22.596022 | orchestrator | 2025-06-02 17:59:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:25.637110 | orchestrator | 2025-06-02 17:59:25 | INFO  | Task 
bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:25.637269 | orchestrator | 2025-06-02 17:59:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:28.682764 | orchestrator | 2025-06-02 17:59:28 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:28.682892 | orchestrator | 2025-06-02 17:59:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:31.728741 | orchestrator | 2025-06-02 17:59:31 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:31.728828 | orchestrator | 2025-06-02 17:59:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:34.776670 | orchestrator | 2025-06-02 17:59:34 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:34.776769 | orchestrator | 2025-06-02 17:59:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:37.815355 | orchestrator | 2025-06-02 17:59:37 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:37.815450 | orchestrator | 2025-06-02 17:59:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:40.867577 | orchestrator | 2025-06-02 17:59:40 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:40.867679 | orchestrator | 2025-06-02 17:59:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:43.905299 | orchestrator | 2025-06-02 17:59:43 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:43.905538 | orchestrator | 2025-06-02 17:59:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:46.943961 | orchestrator | 2025-06-02 17:59:46 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:46.944088 | orchestrator | 2025-06-02 17:59:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:49.988677 | orchestrator | 2025-06-02 17:59:49 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:49.988791 | orchestrator | 2025-06-02 17:59:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:53.052645 | orchestrator | 2025-06-02 17:59:53 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:53.052745 | orchestrator | 2025-06-02 17:59:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:56.092625 | orchestrator | 2025-06-02 17:59:56 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:56.092731 | orchestrator | 2025-06-02 17:59:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:59.142879 | orchestrator | 2025-06-02 17:59:59 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 17:59:59.142957 | orchestrator | 2025-06-02 17:59:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:02.188579 | orchestrator | 2025-06-02 18:00:02 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:02.188677 | orchestrator | 2025-06-02 18:00:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:05.242670 | orchestrator | 2025-06-02 18:00:05 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:05.242735 | orchestrator | 2025-06-02 18:00:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:08.291115 | orchestrator | 2025-06-02 18:00:08 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:08.291288 | orchestrator | 2025-06-02 18:00:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:11.344663 | orchestrator | 2025-06-02 18:00:11 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:11.344758 | orchestrator | 2025-06-02 18:00:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:14.386957 | orchestrator | 2025-06-02 18:00:14 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:14.387039 | orchestrator | 2025-06-02 18:00:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:17.432997 | orchestrator | 2025-06-02 18:00:17 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:17.433086 | orchestrator | 2025-06-02 18:00:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:20.478741 | orchestrator | 2025-06-02 18:00:20 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:20.478847 | orchestrator | 2025-06-02 18:00:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:23.525117 | orchestrator | 2025-06-02 18:00:23 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:23.525252 | orchestrator | 2025-06-02 18:00:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:26.566914 | orchestrator | 2025-06-02 18:00:26 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:26.567045 | orchestrator | 2025-06-02 18:00:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:29.612475 | orchestrator | 2025-06-02 18:00:29 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state STARTED
2025-06-02 18:00:29.612606 | orchestrator | 2025-06-02 18:00:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 18:00:32.660362 | orchestrator | 2025-06-02 18:00:32 | INFO  | Task bf6c0224-e885-4249-b191-8cdeae093de2 is in state SUCCESS
2025-06-02 18:00:32.662270 | orchestrator |
2025-06-02 18:00:32.662367 | orchestrator | None
2025-06-02 18:00:32.662382 | orchestrator |
2025-06-02 18:00:32.662395 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 18:00:32.662406 | orchestrator |
2025-06-02 18:00:32.662418 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02
18:00:32.662454 | orchestrator | Monday 02 June 2025 17:55:41 +0000 (0:00:00.265) 0:00:00.265 ***********
2025-06-02 18:00:32.662466 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.662478 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:00:32.662489 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:00:32.662500 | orchestrator |
2025-06-02 18:00:32.662511 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 18:00:32.662523 | orchestrator | Monday 02 June 2025 17:55:42 +0000 (0:00:00.291) 0:00:00.557 ***********
2025-06-02 18:00:32.662534 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-06-02 18:00:32.662545 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-06-02 18:00:32.662556 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-06-02 18:00:32.662567 | orchestrator |
2025-06-02 18:00:32.662578 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-06-02 18:00:32.662589 | orchestrator |
2025-06-02 18:00:32.662600 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 18:00:32.662611 | orchestrator | Monday 02 June 2025 17:55:42 +0000 (0:00:00.434) 0:00:00.992 ***********
2025-06-02 18:00:32.662622 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 18:00:32.662634 | orchestrator |
2025-06-02 18:00:32.662645 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-06-02 18:00:32.662656 | orchestrator | Monday 02 June 2025 17:55:43 +0000 (0:00:00.605) 0:00:01.597 ***********
2025-06-02 18:00:32.662667 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-06-02 18:00:32.662678 | orchestrator |
2025-06-02 18:00:32.662688 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-06-02 18:00:32.662699 | orchestrator | Monday 02 June 2025 17:55:46 +0000 (0:00:03.517) 0:00:05.115 ***********
2025-06-02 18:00:32.662710 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-06-02 18:00:32.662721 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-06-02 18:00:32.662732 | orchestrator |
2025-06-02 18:00:32.662743 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-06-02 18:00:32.662754 | orchestrator | Monday 02 June 2025 17:55:53 +0000 (0:00:06.964) 0:00:12.079 ***********
2025-06-02 18:00:32.662765 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 18:00:32.662776 | orchestrator |
2025-06-02 18:00:32.662787 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-06-02 18:00:32.662798 | orchestrator | Monday 02 June 2025 17:55:57 +0000 (0:00:03.524) 0:00:15.604 ***********
2025-06-02 18:00:32.662809 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 18:00:32.662820 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-02 18:00:32.662857 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-02 18:00:32.662869 | orchestrator |
2025-06-02 18:00:32.662880 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-06-02 18:00:32.662890 | orchestrator | Monday 02 June 2025 17:56:05 +0000 (0:00:08.334) 0:00:23.939 ***********
2025-06-02 18:00:32.662902 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 18:00:32.662913 | orchestrator |
2025-06-02 18:00:32.662924 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-06-02 18:00:32.662935 | orchestrator | Monday 02 June 2025 17:56:09 +0000 (0:00:03.610) 0:00:27.550 ***********
2025-06-02 18:00:32.662946 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-02 18:00:32.662957 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-02 18:00:32.662968 | orchestrator |
2025-06-02 18:00:32.662978 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-06-02 18:00:32.662989 | orchestrator | Monday 02 June 2025 17:56:16 +0000 (0:00:07.680) 0:00:35.230 ***********
2025-06-02 18:00:32.663000 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-06-02 18:00:32.663011 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-06-02 18:00:32.663022 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-06-02 18:00:32.663033 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-06-02 18:00:32.663044 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-06-02 18:00:32.663054 | orchestrator |
2025-06-02 18:00:32.663065 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 18:00:32.663076 | orchestrator | Monday 02 June 2025 17:56:33 +0000 (0:00:16.708) 0:00:51.939 ***********
2025-06-02 18:00:32.663087 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 18:00:32.663098 | orchestrator |
2025-06-02 18:00:32.663109 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-06-02 18:00:32.663120 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:00.548) 0:00:52.488 ***********
2025-06-02 18:00:32.663164 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663175 | orchestrator |
2025-06-02 18:00:32.663186 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-06-02 18:00:32.663197 | orchestrator | Monday 02 June 2025 17:56:39 +0000 (0:00:05.091) 0:00:57.579 ***********
2025-06-02 18:00:32.663208 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663220 | orchestrator |
2025-06-02 18:00:32.663231 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-02 18:00:32.663350 | orchestrator | Monday 02 June 2025 17:56:43 +0000 (0:00:03.794) 0:01:01.373 ***********
2025-06-02 18:00:32.663367 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.663379 | orchestrator |
2025-06-02 18:00:32.663390 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-06-02 18:00:32.663408 | orchestrator | Monday 02 June 2025 17:56:46 +0000 (0:00:03.341) 0:01:04.715 ***********
2025-06-02 18:00:32.663420 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-02 18:00:32.663431 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-02 18:00:32.663441 | orchestrator |
2025-06-02 18:00:32.663452 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-06-02 18:00:32.663463 | orchestrator | Monday 02 June 2025 17:56:56 +0000 (0:00:10.228) 0:01:14.943 ***********
2025-06-02 18:00:32.663473 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-06-02 18:00:32.663485 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-06-02 18:00:32.663497 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-06-02 18:00:32.663519 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True},
{'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-06-02 18:00:32.663530 | orchestrator |
2025-06-02 18:00:32.663541 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-06-02 18:00:32.663552 | orchestrator | Monday 02 June 2025 17:57:13 +0000 (0:00:16.630) 0:01:31.574 ***********
2025-06-02 18:00:32.663563 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663573 | orchestrator |
2025-06-02 18:00:32.663584 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-06-02 18:00:32.663595 | orchestrator | Monday 02 June 2025 17:57:17 +0000 (0:00:04.682) 0:01:36.256 ***********
2025-06-02 18:00:32.663606 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663617 | orchestrator |
2025-06-02 18:00:32.663632 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-06-02 18:00:32.663643 | orchestrator | Monday 02 June 2025 17:57:23 +0000 (0:00:05.268) 0:01:41.525 ***********
2025-06-02 18:00:32.663654 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:00:32.663665 | orchestrator |
2025-06-02 18:00:32.663676 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-06-02 18:00:32.663686 | orchestrator | Monday 02 June 2025 17:57:23 +0000 (0:00:00.216) 0:01:41.741 ***********
2025-06-02 18:00:32.663697 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663708 | orchestrator |
2025-06-02 18:00:32.663719 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 18:00:32.663729 | orchestrator | Monday 02 June 2025 17:57:29 +0000 (0:00:05.630) 0:01:47.371 ***********
2025-06-02 18:00:32.663740 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 18:00:32.663751 | orchestrator |
2025-06-02 18:00:32.663761 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-06-02 18:00:32.663772 | orchestrator | Monday 02 June 2025 17:57:30 +0000 (0:00:01.239) 0:01:48.611 ***********
2025-06-02 18:00:32.663783 | orchestrator | changed: [testbed-node-2]
2025-06-02 18:00:32.663794 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663804 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:00:32.663815 | orchestrator |
2025-06-02 18:00:32.663826 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-06-02 18:00:32.663838 | orchestrator | Monday 02 June 2025 17:57:36 +0000 (0:00:06.065) 0:01:54.676 ***********
2025-06-02 18:00:32.663849 | orchestrator | changed: [testbed-node-2]
2025-06-02 18:00:32.663860 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:00:32.663870 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663881 | orchestrator |
2025-06-02 18:00:32.663892 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-06-02 18:00:32.663903 | orchestrator | Monday 02 June 2025 17:57:40 +0000 (0:00:04.440) 0:01:59.117 ***********
2025-06-02 18:00:32.663913 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.663924 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:00:32.663935 | orchestrator | changed: [testbed-node-2]
2025-06-02 18:00:32.663946 | orchestrator |
2025-06-02 18:00:32.663956 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-06-02 18:00:32.663967 | orchestrator | Monday 02 June 2025 17:57:41 +0000 (0:00:00.756) 0:01:59.873 ***********
2025-06-02 18:00:32.663978 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.663989 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:00:32.663999 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:00:32.664010 | orchestrator |
2025-06-02 18:00:32.664021 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-06-02 18:00:32.664032 | orchestrator | Monday 02 June 2025 17:57:43 +0000 (0:00:02.107) 0:02:01.981 ***********
2025-06-02 18:00:32.664042 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.664053 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:00:32.664071 | orchestrator | changed: [testbed-node-2]
2025-06-02 18:00:32.664081 | orchestrator |
2025-06-02 18:00:32.664092 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-06-02 18:00:32.664103 | orchestrator | Monday 02 June 2025 17:57:45 +0000 (0:00:01.348) 0:02:03.329 ***********
2025-06-02 18:00:32.664114 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.664158 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:00:32.664171 | orchestrator | changed: [testbed-node-2]
2025-06-02 18:00:32.664182 | orchestrator |
2025-06-02 18:00:32.664192 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-06-02 18:00:32.664203 | orchestrator | Monday 02 June 2025 17:57:46 +0000 (0:00:01.951) 0:02:04.577 ***********
2025-06-02 18:00:32.664214 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:00:32.664225 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.664236 | orchestrator | changed: [testbed-node-2]
2025-06-02 18:00:32.664247 | orchestrator |
2025-06-02 18:00:32.664302 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-06-02 18:00:32.664315 | orchestrator | Monday 02 June 2025 17:57:48 +0000 (0:00:01.827) 0:02:06.529 ***********
2025-06-02 18:00:32.664327 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:00:32.664344 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:00:32.664355 | orchestrator | changed: [testbed-node-2]
2025-06-02 18:00:32.664366 | orchestrator |
2025-06-02 18:00:32.664377 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-06-02 18:00:32.664388 | orchestrator | Monday 02 June 2025 17:57:50 +0000 (0:00:01.827) 0:02:08.356 ***********
2025-06-02 18:00:32.664399 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.664410 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:00:32.664421 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:00:32.664432 | orchestrator |
2025-06-02 18:00:32.664443 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-06-02 18:00:32.664454 | orchestrator | Monday 02 June 2025 17:57:50 +0000 (0:00:00.633) 0:02:08.990 ***********
2025-06-02 18:00:32.664464 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:00:32.664475 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:00:32.664486 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.664497 | orchestrator |
2025-06-02 18:00:32.664508 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 18:00:32.664519 | orchestrator | Monday 02 June 2025 17:57:53 +0000 (0:00:02.942) 0:02:11.932 ***********
2025-06-02 18:00:32.664530 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 18:00:32.664541 | orchestrator |
2025-06-02 18:00:32.664552 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-06-02 18:00:32.664563 | orchestrator | Monday 02 June 2025 17:57:54 +0000 (0:00:00.691) 0:02:12.624 ***********
2025-06-02 18:00:32.664574 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.664584 | orchestrator |
2025-06-02 18:00:32.664595 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-02 18:00:32.664606 | orchestrator | Monday 02 June 2025 17:57:58 +0000 (0:00:04.042) 0:02:16.666 ***********
2025-06-02 18:00:32.664617 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.664628 | orchestrator |
2025-06-02 18:00:32.664639 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-06-02 18:00:32.664650 | orchestrator | Monday 02 June 2025 17:58:01 +0000 (0:00:03.336) 0:02:20.002 ***********
2025-06-02 18:00:32.664661 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-02 18:00:32.664672 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-02 18:00:32.664682 | orchestrator |
2025-06-02 18:00:32.664693 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-06-02 18:00:32.664704 | orchestrator | Monday 02 June 2025 17:58:08 +0000 (0:00:06.683) 0:02:26.686 ***********
2025-06-02 18:00:32.664715 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.664726 | orchestrator |
2025-06-02 18:00:32.664737 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-06-02 18:00:32.664754 | orchestrator | Monday 02 June 2025 17:58:11 +0000 (0:00:03.470) 0:02:30.156 ***********
2025-06-02 18:00:32.664765 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:00:32.664776 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:00:32.664787 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:00:32.664798 | orchestrator |
2025-06-02 18:00:32.664809 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-06-02 18:00:32.664820 | orchestrator | Monday 02 June 2025 17:58:12 +0000 (0:00:00.351) 0:02:30.508 ***********
2025-06-02 18:00:32.664834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 18:00:32.664885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 18:00:32.664904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 18:00:32.664917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 18:00:32.664937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 18:00:32.664949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 18:00:32.664961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 18:00:32.664973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 18:00:32.665020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 18:00:32.665035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 18:00:32.665048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 18:00:32.665065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 18:00:32.665077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 18:00:32.665089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 18:00:32.665182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.665198 | orchestrator | 2025-06-02 18:00:32.665210 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-02 18:00:32.665227 | orchestrator | Monday 02 June 2025 17:58:14 +0000 (0:00:02.674) 0:02:33.182 *********** 2025-06-02 18:00:32.665239 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:00:32.665250 | orchestrator | 2025-06-02 18:00:32.665261 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-02 18:00:32.665272 | orchestrator | Monday 02 June 2025 17:58:15 +0000 (0:00:00.355) 0:02:33.538 *********** 2025-06-02 18:00:32.665283 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:00:32.665294 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:00:32.665305 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:00:32.665315 | orchestrator | 2025-06-02 18:00:32.665326 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-06-02 18:00:32.665337 | orchestrator | Monday 02 June 2025 17:58:15 +0000 (0:00:00.317) 0:02:33.855 *********** 2025-06-02 18:00:32.665349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.665370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.665381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.665393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.665405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.665416 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:00:32.665469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.665496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.665508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.665519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.665531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.665542 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:00:32.665589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.665603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.665623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.665644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.665666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.665691 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:00:32.665715 | orchestrator | 2025-06-02 18:00:32.665732 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:00:32.665749 | orchestrator | Monday 02 June 2025 17:58:16 +0000 (0:00:00.682) 0:02:34.538 *********** 2025-06-02 18:00:32.665764 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:00:32.665780 | orchestrator | 2025-06-02 18:00:32.665797 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-02 18:00:32.665816 | orchestrator | Monday 02 June 2025 17:58:16 +0000 (0:00:00.537) 0:02:35.076 *********** 2025-06-02 18:00:32.665832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.665908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.665938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.665953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.665968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.665982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.665997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.666267 | orchestrator | 2025-06-02 18:00:32.666282 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-02 18:00:32.666297 | orchestrator | Monday 02 June 2025 17:58:22 +0000 (0:00:05.411) 0:02:40.487 *********** 2025-06-02 18:00:32.666312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.666329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.666345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.666414 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:00:32.666432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.666449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.666466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.666525 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:00:32.666557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.666575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.666590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.666640 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:00:32.666658 | orchestrator | 2025-06-02 18:00:32.666673 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-02 18:00:32.666689 | orchestrator | Monday 02 June 2025 17:58:22 +0000 (0:00:00.634) 0:02:41.122 *********** 2025-06-02 18:00:32.666706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.666748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.666767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.666816 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:00:32.666826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.666848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.666872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.666903 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:00:32.666913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:00:32.666923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:00:32.666939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:00:32.666973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:00:32.666983 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:00:32.666993 | orchestrator | 2025-06-02 18:00:32.667003 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-02 18:00:32.667012 | orchestrator | Monday 02 June 2025 17:58:23 +0000 (0:00:01.043) 0:02:42.165 *********** 2025-06-02 18:00:32.667023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.667034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.667051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.667073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.667083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.667093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.667104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667308 | orchestrator | 2025-06-02 18:00:32.667318 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-06-02 18:00:32.667328 | orchestrator | Monday 02 June 2025 17:58:29 +0000 (0:00:05.331) 0:02:47.497 *********** 2025-06-02 18:00:32.667338 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 18:00:32.667348 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 18:00:32.667358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-02 18:00:32.667368 | orchestrator | 2025-06-02 18:00:32.667377 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-06-02 18:00:32.667387 | orchestrator | Monday 02 June 2025 17:58:30 +0000 (0:00:01.650) 0:02:49.147 *********** 2025-06-02 18:00:32.667408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.667420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.667430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.667448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.667466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.667481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.667508 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667547 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667577 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.667618 | orchestrator | 2025-06-02 18:00:32.667629 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 
2025-06-02 18:00:32.667638 | orchestrator | Monday 02 June 2025 17:58:47 +0000 (0:00:16.479) 0:03:05.627 *********** 2025-06-02 18:00:32.667661 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.667678 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:00:32.667702 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:00:32.667720 | orchestrator | 2025-06-02 18:00:32.667735 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-02 18:00:32.667751 | orchestrator | Monday 02 June 2025 17:58:48 +0000 (0:00:01.478) 0:03:07.105 *********** 2025-06-02 18:00:32.667766 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.667781 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.667795 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.667809 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.667823 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.667837 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.667852 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.667867 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.667881 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.667896 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 18:00:32.667911 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 18:00:32.667927 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 18:00:32.667942 | orchestrator | 2025-06-02 18:00:32.667957 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-02 
18:00:32.667972 | orchestrator | Monday 02 June 2025 17:58:54 +0000 (0:00:05.550) 0:03:12.656 *********** 2025-06-02 18:00:32.667986 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.668001 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.668015 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.668031 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.668046 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.668062 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.668077 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.668093 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.668109 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.668199 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 18:00:32.668222 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 18:00:32.668235 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 18:00:32.668245 | orchestrator | 2025-06-02 18:00:32.668255 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-02 18:00:32.668265 | orchestrator | Monday 02 June 2025 17:58:59 +0000 (0:00:05.124) 0:03:17.780 *********** 2025-06-02 18:00:32.668275 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.668284 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.668294 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 18:00:32.668304 | orchestrator | changed: [testbed-node-0] => 
(item=client_ca.cert.pem) 2025-06-02 18:00:32.668314 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.668323 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 18:00:32.668333 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.668359 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.668375 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 18:00:32.668407 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 18:00:32.668430 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 18:00:32.668445 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 18:00:32.668461 | orchestrator | 2025-06-02 18:00:32.668478 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-02 18:00:32.668494 | orchestrator | Monday 02 June 2025 17:59:04 +0000 (0:00:05.370) 0:03:23.151 *********** 2025-06-02 18:00:32.668512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.668524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.668535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:00:32.668545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.668567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.668581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:00:32.668590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:00:32.668714 | orchestrator | 2025-06-02 18:00:32.668722 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:00:32.668730 | orchestrator | Monday 02 June 2025 17:59:08 +0000 (0:00:03.649) 0:03:26.800 *********** 2025-06-02 18:00:32.668739 | 
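The container definitions above carry healthchecks of the form `healthcheck_port octavia-worker 5672` and `healthcheck_port octavia-health-manager 3306` — the ports belong to backing services (RabbitMQ and MariaDB), which suggests the check verifies connectivity to those backends rather than the service's own listen port. The following is a minimal generic TCP reachability probe illustrating that idea; it is not kolla's actual `healthcheck_port` script, whose exact semantics may differ.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A generic reachability probe, sketched here for illustration only --
    kolla's healthcheck_port helper inspects the named process and may
    behave differently.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

With a probe like this, a `['CMD-SHELL', ...]` healthcheck reduces to "exit 0 if the backend port answers, non-zero otherwise".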
orchestrator | skipping: [testbed-node-0] 2025-06-02 18:00:32.668746 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:00:32.668754 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:00:32.668762 | orchestrator | 2025-06-02 18:00:32.668770 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-06-02 18:00:32.668778 | orchestrator | Monday 02 June 2025 17:59:08 +0000 (0:00:00.312) 0:03:27.113 *********** 2025-06-02 18:00:32.668786 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.668794 | orchestrator | 2025-06-02 18:00:32.668802 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-02 18:00:32.668810 | orchestrator | Monday 02 June 2025 17:59:10 +0000 (0:00:02.150) 0:03:29.263 *********** 2025-06-02 18:00:32.668818 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.668826 | orchestrator | 2025-06-02 18:00:32.668834 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-02 18:00:32.668841 | orchestrator | Monday 02 June 2025 17:59:13 +0000 (0:00:02.794) 0:03:32.057 *********** 2025-06-02 18:00:32.668854 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.668862 | orchestrator | 2025-06-02 18:00:32.668870 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-06-02 18:00:32.668879 | orchestrator | Monday 02 June 2025 17:59:15 +0000 (0:00:02.120) 0:03:34.178 *********** 2025-06-02 18:00:32.668886 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.668894 | orchestrator | 2025-06-02 18:00:32.668902 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-02 18:00:32.668910 | orchestrator | Monday 02 June 2025 17:59:18 +0000 (0:00:02.176) 0:03:36.354 *********** 2025-06-02 18:00:32.668918 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.668925 | 
orchestrator | 2025-06-02 18:00:32.668933 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 18:00:32.668941 | orchestrator | Monday 02 June 2025 17:59:39 +0000 (0:00:21.079) 0:03:57.434 *********** 2025-06-02 18:00:32.668949 | orchestrator | 2025-06-02 18:00:32.668957 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 18:00:32.668965 | orchestrator | Monday 02 June 2025 17:59:39 +0000 (0:00:00.068) 0:03:57.502 *********** 2025-06-02 18:00:32.668973 | orchestrator | 2025-06-02 18:00:32.668981 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 18:00:32.668989 | orchestrator | Monday 02 June 2025 17:59:39 +0000 (0:00:00.064) 0:03:57.567 *********** 2025-06-02 18:00:32.668996 | orchestrator | 2025-06-02 18:00:32.669004 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-02 18:00:32.669018 | orchestrator | Monday 02 June 2025 17:59:39 +0000 (0:00:00.068) 0:03:57.636 *********** 2025-06-02 18:00:32.669026 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.669034 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:00:32.669042 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:00:32.669050 | orchestrator | 2025-06-02 18:00:32.669061 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-02 18:00:32.669069 | orchestrator | Monday 02 June 2025 17:59:51 +0000 (0:00:12.339) 0:04:09.975 *********** 2025-06-02 18:00:32.669077 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:00:32.669085 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.669093 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:00:32.669101 | orchestrator | 2025-06-02 18:00:32.669109 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 
2025-06-02 18:00:32.669117 | orchestrator | Monday 02 June 2025 18:00:03 +0000 (0:00:11.920) 0:04:21.896 *********** 2025-06-02 18:00:32.669151 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.669166 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:00:32.669175 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:00:32.669183 | orchestrator | 2025-06-02 18:00:32.669191 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-02 18:00:32.669199 | orchestrator | Monday 02 June 2025 18:00:13 +0000 (0:00:10.230) 0:04:32.126 *********** 2025-06-02 18:00:32.669207 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:00:32.669215 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.669223 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:00:32.669231 | orchestrator | 2025-06-02 18:00:32.669250 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-02 18:00:32.669258 | orchestrator | Monday 02 June 2025 18:00:24 +0000 (0:00:10.735) 0:04:42.862 *********** 2025-06-02 18:00:32.669275 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:00:32.669283 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:00:32.669291 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:00:32.669299 | orchestrator | 2025-06-02 18:00:32.669307 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:00:32.669316 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 18:00:32.669331 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 18:00:32.669339 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 18:00:32.669347 | orchestrator | 2025-06-02 18:00:32.669355 | orchestrator | 2025-06-02 
18:00:32.669363 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:00:32.669370 | orchestrator | Monday 02 June 2025 18:00:30 +0000 (0:00:05.861) 0:04:48.724 *********** 2025-06-02 18:00:32.669378 | orchestrator | =============================================================================== 2025-06-02 18:00:32.669386 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.08s 2025-06-02 18:00:32.669394 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.71s 2025-06-02 18:00:32.669402 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.63s 2025-06-02 18:00:32.669410 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.48s 2025-06-02 18:00:32.669418 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.34s 2025-06-02 18:00:32.669427 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.92s 2025-06-02 18:00:32.669440 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.74s 2025-06-02 18:00:32.669453 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.23s 2025-06-02 18:00:32.669466 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.23s 2025-06-02 18:00:32.669479 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.33s 2025-06-02 18:00:32.669490 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.68s 2025-06-02 18:00:32.669501 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.96s 2025-06-02 18:00:32.669513 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.68s 2025-06-02 18:00:32.669526 | 
orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.07s 2025-06-02 18:00:32.669539 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.86s 2025-06-02 18:00:32.669551 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.63s 2025-06-02 18:00:32.669565 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.55s 2025-06-02 18:00:32.669578 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.41s 2025-06-02 18:00:32.669591 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.37s 2025-06-02 18:00:32.669605 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.33s 2025-06-02 18:00:32.669614 | orchestrator | 2025-06-02 18:00:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:35.695711 | orchestrator | 2025-06-02 18:00:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:38.737833 | orchestrator | 2025-06-02 18:00:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:41.783048 | orchestrator | 2025-06-02 18:00:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:44.821729 | orchestrator | 2025-06-02 18:00:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:47.863450 | orchestrator | 2025-06-02 18:00:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:50.902433 | orchestrator | 2025-06-02 18:00:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:53.943937 | orchestrator | 2025-06-02 18:00:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:00:56.989487 | orchestrator | 2025-06-02 18:00:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:00.033450 | orchestrator | 2025-06-02 
18:01:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:03.072790 | orchestrator | 2025-06-02 18:01:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:06.111347 | orchestrator | 2025-06-02 18:01:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:09.158672 | orchestrator | 2025-06-02 18:01:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:12.201045 | orchestrator | 2025-06-02 18:01:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:15.245635 | orchestrator | 2025-06-02 18:01:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:18.287920 | orchestrator | 2025-06-02 18:01:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:21.326981 | orchestrator | 2025-06-02 18:01:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:24.393592 | orchestrator | 2025-06-02 18:01:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:27.435763 | orchestrator | 2025-06-02 18:01:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:30.485342 | orchestrator | 2025-06-02 18:01:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:01:33.535246 | orchestrator | 2025-06-02 18:01:33.816321 | orchestrator | 2025-06-02 18:01:33.820770 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Jun 2 18:01:33 UTC 2025 2025-06-02 18:01:33.820841 | orchestrator | 2025-06-02 18:01:34.283348 | orchestrator | ok: Runtime: 0:34:58.833765 2025-06-02 18:01:34.550128 | 2025-06-02 18:01:34.550329 | TASK [Bootstrap services] 2025-06-02 18:01:35.319811 | orchestrator | 2025-06-02 18:01:35.319974 | orchestrator | # BOOTSTRAP 2025-06-02 18:01:35.319986 | orchestrator | 2025-06-02 18:01:35.319993 | orchestrator | + set -e 2025-06-02 18:01:35.320000 | orchestrator | + echo 2025-06-02 18:01:35.320007 | orchestrator | + echo '# BOOTSTRAP' 
2025-06-02 18:01:35.320017 | orchestrator | + echo 2025-06-02 18:01:35.320046 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-02 18:01:35.328399 | orchestrator | + set -e 2025-06-02 18:01:35.328481 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-02 18:01:39.931716 | orchestrator | 2025-06-02 18:01:39 | INFO  | It takes a moment until task b12777ff-3f8b-4249-a0f8-0cc8033a85ec (flavor-manager) has been started and output is visible here. 2025-06-02 18:01:43.911508 | orchestrator | 2025-06-02 18:01:43 | INFO  | Flavor SCS-1V-4 created 2025-06-02 18:01:44.210785 | orchestrator | 2025-06-02 18:01:44 | INFO  | Flavor SCS-2V-8 created 2025-06-02 18:01:44.412716 | orchestrator | 2025-06-02 18:01:44 | INFO  | Flavor SCS-4V-16 created 2025-06-02 18:01:44.599640 | orchestrator | 2025-06-02 18:01:44 | INFO  | Flavor SCS-8V-32 created 2025-06-02 18:01:44.737669 | orchestrator | 2025-06-02 18:01:44 | INFO  | Flavor SCS-1V-2 created 2025-06-02 18:01:44.883205 | orchestrator | 2025-06-02 18:01:44 | INFO  | Flavor SCS-2V-4 created 2025-06-02 18:01:45.047345 | orchestrator | 2025-06-02 18:01:45 | INFO  | Flavor SCS-4V-8 created 2025-06-02 18:01:45.173885 | orchestrator | 2025-06-02 18:01:45 | INFO  | Flavor SCS-8V-16 created 2025-06-02 18:01:45.305018 | orchestrator | 2025-06-02 18:01:45 | INFO  | Flavor SCS-16V-32 created 2025-06-02 18:01:45.444463 | orchestrator | 2025-06-02 18:01:45 | INFO  | Flavor SCS-1V-8 created 2025-06-02 18:01:45.591367 | orchestrator | 2025-06-02 18:01:45 | INFO  | Flavor SCS-2V-16 created 2025-06-02 18:01:45.746201 | orchestrator | 2025-06-02 18:01:45 | INFO  | Flavor SCS-4V-32 created 2025-06-02 18:01:45.904841 | orchestrator | 2025-06-02 18:01:45 | INFO  | Flavor SCS-1L-1 created 2025-06-02 18:01:46.042827 | orchestrator | 2025-06-02 18:01:46 | INFO  | Flavor SCS-2V-4-20s created 2025-06-02 18:01:46.167855 | orchestrator | 2025-06-02 18:01:46 | INFO  | Flavor SCS-4V-16-100s created 
2025-06-02 18:01:46.308622 | orchestrator | 2025-06-02 18:01:46 | INFO  | Flavor SCS-1V-4-10 created 2025-06-02 18:01:46.461176 | orchestrator | 2025-06-02 18:01:46 | INFO  | Flavor SCS-2V-8-20 created 2025-06-02 18:01:46.604168 | orchestrator | 2025-06-02 18:01:46 | INFO  | Flavor SCS-4V-16-50 created 2025-06-02 18:01:46.738300 | orchestrator | 2025-06-02 18:01:46 | INFO  | Flavor SCS-8V-32-100 created 2025-06-02 18:01:46.883591 | orchestrator | 2025-06-02 18:01:46 | INFO  | Flavor SCS-1V-2-5 created 2025-06-02 18:01:47.030218 | orchestrator | 2025-06-02 18:01:47 | INFO  | Flavor SCS-2V-4-10 created 2025-06-02 18:01:47.158522 | orchestrator | 2025-06-02 18:01:47 | INFO  | Flavor SCS-4V-8-20 created 2025-06-02 18:01:47.288612 | orchestrator | 2025-06-02 18:01:47 | INFO  | Flavor SCS-8V-16-50 created 2025-06-02 18:01:47.446490 | orchestrator | 2025-06-02 18:01:47 | INFO  | Flavor SCS-16V-32-100 created 2025-06-02 18:01:47.597887 | orchestrator | 2025-06-02 18:01:47 | INFO  | Flavor SCS-1V-8-20 created 2025-06-02 18:01:47.749697 | orchestrator | 2025-06-02 18:01:47 | INFO  | Flavor SCS-2V-16-50 created 2025-06-02 18:01:47.891467 | orchestrator | 2025-06-02 18:01:47 | INFO  | Flavor SCS-4V-32-100 created 2025-06-02 18:01:48.053551 | orchestrator | 2025-06-02 18:01:48 | INFO  | Flavor SCS-1L-1-5 created 2025-06-02 18:01:50.317511 | orchestrator | 2025-06-02 18:01:50 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-02 18:01:50.322720 | orchestrator | Registering Redlock._acquired_script 2025-06-02 18:01:50.322907 | orchestrator | Registering Redlock._extend_script 2025-06-02 18:01:50.322986 | orchestrator | Registering Redlock._release_script 2025-06-02 18:01:50.386305 | orchestrator | 2025-06-02 18:01:50 | INFO  | Task af6043d9-0979-4719-a54b-ee85c3d6518c (bootstrap-basic) was prepared for execution. 
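The flavor names created above follow the SCS naming scheme (`SCS-<vCPUs><V|L>-<RAM GiB>[-<disk GB>[s]]`, where `L` marks a low-performance vCPU and a trailing `s` an SSD root disk). A minimal illustrative parser for that scheme, assuming the convention as seen in the log (this is not the flavor-manager's own code):

```python
import re

# Matches names like SCS-4V-16-100s or SCS-1L-1 as created by the
# flavor-manager run above. Field names are illustrative.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_suffix>[VL])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_suffix>s?))?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "low_perf": m.group("cpu_suffix") == "L",  # L = low-performance vCPU
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else None,
        "ssd": m.group("disk_suffix") == "s",      # trailing s = SSD root disk
    }
```

For example, `parse_scs_flavor("SCS-4V-16-100s")` yields 4 vCPUs, 16 GiB RAM, and a 100 GB SSD root disk.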
2025-06-02 18:01:50.386395 | orchestrator | 2025-06-02 18:01:50 | INFO  | It takes a moment until task af6043d9-0979-4719-a54b-ee85c3d6518c (bootstrap-basic) has been started and output is visible here. 2025-06-02 18:01:54.871298 | orchestrator | 2025-06-02 18:01:54.872165 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-02 18:01:54.872208 | orchestrator | 2025-06-02 18:01:54.875613 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 18:01:54.877925 | orchestrator | Monday 02 June 2025 18:01:54 +0000 (0:00:00.087) 0:00:00.087 *********** 2025-06-02 18:01:56.729947 | orchestrator | ok: [localhost] 2025-06-02 18:01:56.730217 | orchestrator | 2025-06-02 18:01:56.731037 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-02 18:01:56.731802 | orchestrator | Monday 02 June 2025 18:01:56 +0000 (0:00:01.863) 0:00:01.950 *********** 2025-06-02 18:02:05.866665 | orchestrator | ok: [localhost] 2025-06-02 18:02:05.866808 | orchestrator | 2025-06-02 18:02:05.867974 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-02 18:02:05.869790 | orchestrator | Monday 02 June 2025 18:02:05 +0000 (0:00:09.134) 0:00:11.085 *********** 2025-06-02 18:02:13.376438 | orchestrator | changed: [localhost] 2025-06-02 18:02:13.376825 | orchestrator | 2025-06-02 18:02:13.377664 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-02 18:02:13.380016 | orchestrator | Monday 02 June 2025 18:02:13 +0000 (0:00:07.508) 0:00:18.593 *********** 2025-06-02 18:02:20.644555 | orchestrator | ok: [localhost] 2025-06-02 18:02:20.644657 | orchestrator | 2025-06-02 18:02:20.644673 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-02 18:02:20.647389 | orchestrator | Monday 02 June 2025 
18:02:20 +0000 (0:00:07.269) 0:00:25.863 *********** 2025-06-02 18:02:27.178664 | orchestrator | changed: [localhost] 2025-06-02 18:02:27.179835 | orchestrator | 2025-06-02 18:02:27.182147 | orchestrator | TASK [Create public network] *************************************************** 2025-06-02 18:02:27.184272 | orchestrator | Monday 02 June 2025 18:02:27 +0000 (0:00:06.533) 0:00:32.397 *********** 2025-06-02 18:02:32.294735 | orchestrator | changed: [localhost] 2025-06-02 18:02:32.294841 | orchestrator | 2025-06-02 18:02:32.295845 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-02 18:02:32.297101 | orchestrator | Monday 02 June 2025 18:02:32 +0000 (0:00:05.115) 0:00:37.513 *********** 2025-06-02 18:02:38.429656 | orchestrator | changed: [localhost] 2025-06-02 18:02:38.429830 | orchestrator | 2025-06-02 18:02:38.433082 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-02 18:02:38.433213 | orchestrator | Monday 02 June 2025 18:02:38 +0000 (0:00:06.133) 0:00:43.646 *********** 2025-06-02 18:02:42.938599 | orchestrator | changed: [localhost] 2025-06-02 18:02:42.939364 | orchestrator | 2025-06-02 18:02:42.939862 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-02 18:02:42.942210 | orchestrator | Monday 02 June 2025 18:02:42 +0000 (0:00:04.509) 0:00:48.156 *********** 2025-06-02 18:02:46.851193 | orchestrator | changed: [localhost] 2025-06-02 18:02:46.851804 | orchestrator | 2025-06-02 18:02:46.853160 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-02 18:02:46.855522 | orchestrator | Monday 02 June 2025 18:02:46 +0000 (0:00:03.912) 0:00:52.069 *********** 2025-06-02 18:02:50.462640 | orchestrator | ok: [localhost] 2025-06-02 18:02:50.462977 | orchestrator | 2025-06-02 18:02:50.463669 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 18:02:50.464234 | orchestrator | 2025-06-02 18:02:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 18:02:50.464251 | orchestrator | 2025-06-02 18:02:50 | INFO  | Please wait and do not abort execution. 2025-06-02 18:02:50.465255 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 18:02:50.467412 | orchestrator | 2025-06-02 18:02:50.468165 | orchestrator | 2025-06-02 18:02:50.468829 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:02:50.469354 | orchestrator | Monday 02 June 2025 18:02:50 +0000 (0:00:03.612) 0:00:55.682 *********** 2025-06-02 18:02:50.469610 | orchestrator | =============================================================================== 2025-06-02 18:02:50.470657 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.13s 2025-06-02 18:02:50.471090 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.51s 2025-06-02 18:02:50.471362 | orchestrator | Get volume type local --------------------------------------------------- 7.27s 2025-06-02 18:02:50.471756 | orchestrator | Create volume type local ------------------------------------------------ 6.53s 2025-06-02 18:02:50.472268 | orchestrator | Set public network to default ------------------------------------------- 6.13s 2025-06-02 18:02:50.472780 | orchestrator | Create public network --------------------------------------------------- 5.12s 2025-06-02 18:02:50.473158 | orchestrator | Create public subnet ---------------------------------------------------- 4.51s 2025-06-02 18:02:50.473788 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.91s 2025-06-02 18:02:50.475403 | orchestrator | Create manager role 
----------------------------------------------------- 3.61s 2025-06-02 18:02:50.476438 | orchestrator | Gathering Facts --------------------------------------------------------- 1.86s 2025-06-02 18:02:52.885342 | orchestrator | 2025-06-02 18:02:52 | INFO  | It takes a moment until task 70a37f0b-18af-4f11-b52d-2f81e331f5b5 (image-manager) has been started and output is visible here. 2025-06-02 18:02:56.439569 | orchestrator | 2025-06-02 18:02:56 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-02 18:02:56.653574 | orchestrator | 2025-06-02 18:02:56 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-02 18:02:56.654140 | orchestrator | 2025-06-02 18:02:56 | INFO  | Importing image Cirros 0.6.2 2025-06-02 18:02:56.655460 | orchestrator | 2025-06-02 18:02:56 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 18:02:58.428759 | orchestrator | 2025-06-02 18:02:58 | INFO  | Waiting for image to leave queued state... 2025-06-02 18:03:00.475874 | orchestrator | 2025-06-02 18:03:00 | INFO  | Waiting for import to complete... 
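The repeated "Waiting for image to leave queued state..." / "Waiting for import to complete..." messages above come from a poll loop on the Glance image status. A dependency-free sketch of that pattern, with the status getter injected as a callable (hypothetical helper, not the image-manager's actual code):

```python
import time

def wait_for_status(get_status, wanted="active", failed=("killed",),
                    interval=2.0, timeout=600.0):
    # Poll until the image reaches the wanted state, in the spirit of
    # the image-manager's import wait loop. get_status is any callable
    # returning the current Glance image status string.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == wanted:
            return status
        if status in failed:
            raise RuntimeError(f"import failed with status {status!r}")
        time.sleep(interval)
    raise TimeoutError(f"status still not {wanted!r} after {timeout}s")
```

With openstacksdk one would pass something like `lambda: conn.image.get_image(image_id).status` as the getter.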
2025-06-02 18:03:10.798497 | orchestrator | 2025-06-02 18:03:10 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-02 18:03:11.217505 | orchestrator | 2025-06-02 18:03:11 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-02 18:03:11.222768 | orchestrator | 2025-06-02 18:03:11 | INFO  | Setting internal_version = 0.6.2 2025-06-02 18:03:11.223815 | orchestrator | 2025-06-02 18:03:11 | INFO  | Setting image_original_user = cirros 2025-06-02 18:03:11.224123 | orchestrator | 2025-06-02 18:03:11 | INFO  | Adding tag os:cirros 2025-06-02 18:03:11.544607 | orchestrator | 2025-06-02 18:03:11 | INFO  | Setting property architecture: x86_64 2025-06-02 18:03:11.825522 | orchestrator | 2025-06-02 18:03:11 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 18:03:12.085620 | orchestrator | 2025-06-02 18:03:12 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 18:03:12.320808 | orchestrator | 2025-06-02 18:03:12 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 18:03:12.536184 | orchestrator | 2025-06-02 18:03:12 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 18:03:12.755630 | orchestrator | 2025-06-02 18:03:12 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 18:03:12.967424 | orchestrator | 2025-06-02 18:03:12 | INFO  | Setting property os_distro: cirros 2025-06-02 18:03:13.171593 | orchestrator | 2025-06-02 18:03:13 | INFO  | Setting property replace_frequency: never 2025-06-02 18:03:13.433312 | orchestrator | 2025-06-02 18:03:13 | INFO  | Setting property uuid_validity: none 2025-06-02 18:03:13.649749 | orchestrator | 2025-06-02 18:03:13 | INFO  | Setting property provided_until: none 2025-06-02 18:03:13.859091 | orchestrator | 2025-06-02 18:03:13 | INFO  | Setting property image_description: Cirros 2025-06-02 18:03:14.089795 | orchestrator | 2025-06-02 18:03:14 | INFO  | Setting property image_name: Cirros 2025-06-02 18:03:14.284612 | orchestrator | 2025-06-02 18:03:14 | INFO  | 
Setting property internal_version: 0.6.2 2025-06-02 18:03:14.520929 | orchestrator | 2025-06-02 18:03:14 | INFO  | Setting property image_original_user: cirros 2025-06-02 18:03:14.750427 | orchestrator | 2025-06-02 18:03:14 | INFO  | Setting property os_version: 0.6.2 2025-06-02 18:03:14.947938 | orchestrator | 2025-06-02 18:03:14 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 18:03:15.161905 | orchestrator | 2025-06-02 18:03:15 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-02 18:03:15.382858 | orchestrator | 2025-06-02 18:03:15 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-02 18:03:15.383364 | orchestrator | 2025-06-02 18:03:15 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-02 18:03:15.384082 | orchestrator | 2025-06-02 18:03:15 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-02 18:03:15.598562 | orchestrator | 2025-06-02 18:03:15 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-02 18:03:15.796497 | orchestrator | 2025-06-02 18:03:15 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-02 18:03:15.796618 | orchestrator | 2025-06-02 18:03:15 | INFO  | Importing image Cirros 0.6.3 2025-06-02 18:03:15.797315 | orchestrator | 2025-06-02 18:03:15 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 18:03:16.994947 | orchestrator | 2025-06-02 18:03:16 | INFO  | Waiting for image to leave queued state... 2025-06-02 18:03:19.051520 | orchestrator | 2025-06-02 18:03:19 | INFO  | Waiting for import to complete... 
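Each "Setting property k: v" line above reflects a reconciliation step: the desired property set from the image definition is compared against what Glance already reports, and only differing keys are written. A minimal sketch of that diff (hypothetical helper, not the tool's code):

```python
def properties_to_set(desired: dict, current: dict) -> dict:
    # Return only the properties whose current value differs from the
    # desired one; these are the keys the image-manager would log as
    # "Setting property ...".
    return {k: v for k, v in desired.items() if current.get(k) != v}
```

A freshly imported image has no custom properties, so every desired key is emitted, which matches the long run of "Setting property" lines per image in the log.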
2025-06-02 18:03:29.404280 | orchestrator | 2025-06-02 18:03:29 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-02 18:03:29.690473 | orchestrator | 2025-06-02 18:03:29 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-02 18:03:29.691886 | orchestrator | 2025-06-02 18:03:29 | INFO  | Setting internal_version = 0.6.3 2025-06-02 18:03:29.692569 | orchestrator | 2025-06-02 18:03:29 | INFO  | Setting image_original_user = cirros 2025-06-02 18:03:29.693202 | orchestrator | 2025-06-02 18:03:29 | INFO  | Adding tag os:cirros 2025-06-02 18:03:29.930387 | orchestrator | 2025-06-02 18:03:29 | INFO  | Setting property architecture: x86_64 2025-06-02 18:03:30.147617 | orchestrator | 2025-06-02 18:03:30 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 18:03:30.465181 | orchestrator | 2025-06-02 18:03:30 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 18:03:30.689794 | orchestrator | 2025-06-02 18:03:30 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 18:03:30.881776 | orchestrator | 2025-06-02 18:03:30 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 18:03:31.090311 | orchestrator | 2025-06-02 18:03:31 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 18:03:31.321723 | orchestrator | 2025-06-02 18:03:31 | INFO  | Setting property os_distro: cirros 2025-06-02 18:03:31.538488 | orchestrator | 2025-06-02 18:03:31 | INFO  | Setting property replace_frequency: never 2025-06-02 18:03:31.770169 | orchestrator | 2025-06-02 18:03:31 | INFO  | Setting property uuid_validity: none 2025-06-02 18:03:31.996295 | orchestrator | 2025-06-02 18:03:31 | INFO  | Setting property provided_until: none 2025-06-02 18:03:32.201709 | orchestrator | 2025-06-02 18:03:32 | INFO  | Setting property image_description: Cirros 2025-06-02 18:03:32.446289 | orchestrator | 2025-06-02 18:03:32 | INFO  | Setting property image_name: Cirros 2025-06-02 18:03:32.684494 | orchestrator | 2025-06-02 18:03:32 | INFO  | 
Setting property internal_version: 0.6.3 2025-06-02 18:03:32.888440 | orchestrator | 2025-06-02 18:03:32 | INFO  | Setting property image_original_user: cirros 2025-06-02 18:03:33.105311 | orchestrator | 2025-06-02 18:03:33 | INFO  | Setting property os_version: 0.6.3 2025-06-02 18:03:33.553812 | orchestrator | 2025-06-02 18:03:33 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 18:03:33.742433 | orchestrator | 2025-06-02 18:03:33 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-02 18:03:34.009674 | orchestrator | 2025-06-02 18:03:34 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-02 18:03:34.010135 | orchestrator | 2025-06-02 18:03:34 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-02 18:03:34.011490 | orchestrator | 2025-06-02 18:03:34 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-02 18:03:35.194222 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-02 18:03:37.131235 | orchestrator | 2025-06-02 18:03:37 | INFO  | date: 2025-06-02 2025-06-02 18:03:37.131349 | orchestrator | 2025-06-02 18:03:37 | INFO  | image: octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 18:03:37.131370 | orchestrator | 2025-06-02 18:03:37 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 18:03:37.131480 | orchestrator | 2025-06-02 18:03:37 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2.CHECKSUM 2025-06-02 18:03:37.159507 | orchestrator | 2025-06-02 18:03:37 | INFO  | checksum: 4244ae669e0302e4de8dd880cdee4c27c232e9d393dd18f3521b5d0e7c284b7c 2025-06-02 18:03:37.239151 | orchestrator | 2025-06-02 18:03:37 | 
INFO  | It takes a moment until task 93947d50-6255-4a9e-bdf2-68027d2a7842 (image-manager) has been started and output is visible here. 2025-06-02 18:03:37.482361 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-06-02 18:03:37.483178 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-02 18:03:39.128099 | orchestrator | 2025-06-02 18:03:39 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 18:03:39.148335 | orchestrator | 2025-06-02 18:03:39 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2: 200 2025-06-02 18:03:39.149042 | orchestrator | 2025-06-02 18:03:39 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-02 2025-06-02 18:03:39.149522 | orchestrator | 2025-06-02 18:03:39 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 18:03:39.568874 | orchestrator | 2025-06-02 18:03:39 | INFO  | Waiting for image to leave queued state... 2025-06-02 18:03:41.611683 | orchestrator | 2025-06-02 18:03:41 | INFO  | Waiting for import to complete... 2025-06-02 18:03:51.718717 | orchestrator | 2025-06-02 18:03:51 | INFO  | Waiting for import to complete... 2025-06-02 18:04:01.825045 | orchestrator | 2025-06-02 18:04:01 | INFO  | Waiting for import to complete... 2025-06-02 18:04:11.920067 | orchestrator | 2025-06-02 18:04:11 | INFO  | Waiting for import to complete... 
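The amphora-image script above logs both a `checksum_url` (the published `.CHECKSUM` file) and the SHA-256 it extracted from it; conceptually the downloaded qcow2 is verified against that value. A sketch of the comparison, assuming plain SHA-256 hex digests (function names are illustrative, not the script's own):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_image(data: bytes, expected_checksum: str) -> bool:
    # Compare the SHA-256 of the downloaded image bytes with the value
    # published in the .CHECKSUM file (as logged: "checksum: 4244ae...").
    return sha256_of(data) == expected_checksum.lower()
```

For a real multi-gigabyte qcow2 one would hash in chunks from a file handle rather than holding the bytes in memory.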
2025-06-02 18:04:22.008402 | orchestrator | 2025-06-02 18:04:22 | INFO  | Waiting for import to complete... 2025-06-02 18:04:32.135882 | orchestrator | 2025-06-02 18:04:32 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-02' successfully completed, reloading images 2025-06-02 18:04:32.538114 | orchestrator | 2025-06-02 18:04:32 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 18:04:32.539163 | orchestrator | 2025-06-02 18:04:32 | INFO  | Setting internal_version = 2025-06-02 2025-06-02 18:04:32.539263 | orchestrator | 2025-06-02 18:04:32 | INFO  | Setting image_original_user = ubuntu 2025-06-02 18:04:32.540024 | orchestrator | 2025-06-02 18:04:32 | INFO  | Adding tag amphora 2025-06-02 18:04:32.776632 | orchestrator | 2025-06-02 18:04:32 | INFO  | Adding tag os:ubuntu 2025-06-02 18:04:33.030643 | orchestrator | 2025-06-02 18:04:33 | INFO  | Setting property architecture: x86_64 2025-06-02 18:04:33.218167 | orchestrator | 2025-06-02 18:04:33 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 18:04:33.414605 | orchestrator | 2025-06-02 18:04:33 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 18:04:33.635430 | orchestrator | 2025-06-02 18:04:33 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 18:04:33.834818 | orchestrator | 2025-06-02 18:04:33 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 18:04:34.055059 | orchestrator | 2025-06-02 18:04:34 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 18:04:34.304745 | orchestrator | 2025-06-02 18:04:34 | INFO  | Setting property os_distro: ubuntu 2025-06-02 18:04:34.509608 | orchestrator | 2025-06-02 18:04:34 | INFO  | Setting property replace_frequency: quarterly 2025-06-02 18:04:34.731341 | orchestrator | 2025-06-02 18:04:34 | INFO  | Setting property uuid_validity: last-1 2025-06-02 18:04:34.970309 | orchestrator | 2025-06-02 18:04:34 | INFO  | Setting property provided_until: none 2025-06-02 18:04:35.175798 | orchestrator | 
2025-06-02 18:04:35 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-02 18:04:35.369366 | orchestrator | 2025-06-02 18:04:35 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-02 18:04:35.610275 | orchestrator | 2025-06-02 18:04:35 | INFO  | Setting property internal_version: 2025-06-02 2025-06-02 18:04:35.805029 | orchestrator | 2025-06-02 18:04:35 | INFO  | Setting property image_original_user: ubuntu 2025-06-02 18:04:36.049598 | orchestrator | 2025-06-02 18:04:36 | INFO  | Setting property os_version: 2025-06-02 2025-06-02 18:04:36.269084 | orchestrator | 2025-06-02 18:04:36 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 18:04:36.497857 | orchestrator | 2025-06-02 18:04:36 | INFO  | Setting property image_build_date: 2025-06-02 2025-06-02 18:04:36.716211 | orchestrator | 2025-06-02 18:04:36 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 18:04:36.721220 | orchestrator | 2025-06-02 18:04:36 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 18:04:36.912654 | orchestrator | 2025-06-02 18:04:36 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-02 18:04:36.913960 | orchestrator | 2025-06-02 18:04:36 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-02 18:04:36.915162 | orchestrator | 2025-06-02 18:04:36 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-02 18:04:36.917156 | orchestrator | 2025-06-02 18:04:36 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-02 18:04:37.728696 | orchestrator | ok: Runtime: 0:03:02.474389 2025-06-02 18:04:37.752565 | 2025-06-02 18:04:37.752738 | TASK [Run checks] 2025-06-02 18:04:38.484676 | orchestrator | + set -e 2025-06-02 
18:04:38.484869 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 18:04:38.484890 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 18:04:38.484940 | orchestrator | ++ INTERACTIVE=false 2025-06-02 18:04:38.484955 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 18:04:38.484968 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 18:04:38.484983 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 18:04:38.486099 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 18:04:38.493308 | orchestrator | 2025-06-02 18:04:38.493408 | orchestrator | # CHECK 2025-06-02 18:04:38.493424 | orchestrator | 2025-06-02 18:04:38.493436 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 18:04:38.493454 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 18:04:38.493525 | orchestrator | + echo 2025-06-02 18:04:38.493538 | orchestrator | + echo '# CHECK' 2025-06-02 18:04:38.493549 | orchestrator | + echo 2025-06-02 18:04:38.493565 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 18:04:38.494238 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 18:04:38.559216 | orchestrator | 2025-06-02 18:04:38.559309 | orchestrator | ## Containers @ testbed-manager 2025-06-02 18:04:38.559320 | orchestrator | 2025-06-02 18:04:38.559330 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 18:04:38.559339 | orchestrator | + echo 2025-06-02 18:04:38.559347 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-02 18:04:38.559354 | orchestrator | + echo 2025-06-02 18:04:38.559361 | orchestrator | + osism container testbed-manager ps 2025-06-02 18:04:40.744871 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 18:04:40.745051 | orchestrator | 5afea9246dab registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 13 
minutes ago Up 13 minutes prometheus_blackbox_exporter 2025-06-02 18:04:40.745073 | orchestrator | 1960a4c992a1 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-06-02 18:04:40.745090 | orchestrator | 3111e65bce43 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 18:04:40.745100 | orchestrator | 851fd95c60c4 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-02 18:04:40.745109 | orchestrator | 068d6f035cb2 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-06-02 18:04:40.745119 | orchestrator | 7ba343d3cc0b registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2025-06-02 18:04:40.745133 | orchestrator | 9b8dc3237231 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-02 18:04:40.745143 | orchestrator | 2ebac1228207 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-06-02 18:04:40.745152 | orchestrator | e578592103a7 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-02 18:04:40.745186 | orchestrator | 49cdef0e64fc phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2025-06-02 18:04:40.745196 | orchestrator | d187dc92687c registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient 2025-06-02 18:04:40.745205 | orchestrator | b02183621cde registry.osism.tech/osism/homer:v25.05.2 
"/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-06-02 18:04:40.745215 | orchestrator | f452b4f5abec registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-02 18:04:40.745229 | orchestrator | 96d2dfb89416 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 56 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2025-06-02 18:04:40.745258 | orchestrator | 617be65bc27f registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-06-02 18:04:40.745267 | orchestrator | 51a1d225b2eb registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) osism-ansible 2025-06-02 18:04:40.745277 | orchestrator | 9298f9c56c3f registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-06-02 18:04:40.745286 | orchestrator | 47e09e34b746 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 56 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-06-02 18:04:40.745295 | orchestrator | c68e5a6a46ac registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-02 18:04:40.745304 | orchestrator | 40fc07385129 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-watchdog-1 2025-06-02 18:04:40.745314 | orchestrator | d4e11b1977af registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-flower-1 2025-06-02 18:04:40.745323 | orchestrator | f339b1e51042 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-listener-1 2025-06-02 
18:04:40.745332 | orchestrator | 9b5941cd07cc registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-02 18:04:40.745348 | orchestrator | 89815881f7e8 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-openstack-1
2025-06-02 18:04:40.745357 | orchestrator | caf0540b537e registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) manager-beat-1
2025-06-02 18:04:40.745367 | orchestrator | 5d35257a539d registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 56 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1
2025-06-02 18:04:40.745376 | orchestrator | 4144b5b93d0b registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 56 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-02 18:04:40.745385 | orchestrator | e0b209c79d23 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 56 minutes ago Up 39 minutes (healthy) osismclient
2025-06-02 18:04:40.745394 | orchestrator | ffdd912e127d registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-02 18:04:41.031346 | orchestrator |
2025-06-02 18:04:41.031450 | orchestrator | ## Images @ testbed-manager
2025-06-02 18:04:41.031463 | orchestrator |
2025-06-02 18:04:41.031471 | orchestrator | + echo
2025-06-02 18:04:41.031479 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-02 18:04:41.031488 | orchestrator | + echo
2025-06-02 18:04:41.031495 | orchestrator | + osism container testbed-manager images
2025-06-02 18:04:43.059403 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 18:04:43.060789 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 5 hours ago 574MB
2025-06-02 18:04:43.060844 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e73e0506845d 15 hours ago 11.5MB
2025-06-02 18:04:43.060878 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 86ee4afc8387 15 hours ago 225MB
2025-06-02 18:04:43.060890 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 45 hours ago 578MB
2025-06-02 18:04:43.060932 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 18:04:43.060944 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 18:04:43.060955 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 18:04:43.060966 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 2 days ago 892MB
2025-06-02 18:04:43.060977 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 2 days ago 361MB
2025-06-02 18:04:43.060988 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 18:04:43.060999 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 18:04:43.061032 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 2 days ago 457MB
2025-06-02 18:04:43.061044 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 2 days ago 538MB
2025-06-02 18:04:43.061055 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 2 days ago 1.21GB
2025-06-02 18:04:43.061066 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 2 days ago 308MB
2025-06-02 18:04:43.061077 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 3 days ago 297MB
2025-06-02 18:04:43.061091 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 4 days ago 41.4MB
2025-06-02 18:04:43.061110 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 6 days ago 224MB
2025-06-02 18:04:43.061127 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 weeks ago 453MB
2025-06-02 18:04:43.061145 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB
2025-06-02 18:04:43.061163 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB
2025-06-02 18:04:43.061197 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB
2025-06-02 18:04:43.061231 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB
2025-06-02 18:04:43.336446 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 18:04:43.337355 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 18:04:43.395613 | orchestrator |
2025-06-02 18:04:43.395738 | orchestrator | ## Containers @ testbed-node-0
2025-06-02 18:04:43.395757 | orchestrator |
2025-06-02 18:04:43.395770 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 18:04:43.395781 | orchestrator | + echo
2025-06-02 18:04:43.395794 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-06-02 18:04:43.395822 | orchestrator | + echo
2025-06-02 18:04:43.395844 | orchestrator | + osism container testbed-node-0 ps
2025-06-02 18:04:45.583417 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 18:04:45.583530 | orchestrator | 05c9bc34623d registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 18:04:45.583543 | orchestrator | c5e820e149fc registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 18:04:45.583552 | orchestrator | f0f9b51f4dbc registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-02 18:04:45.583559 | orchestrator | 2267d2e39829 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-02 18:04:45.583567 | orchestrator | f66439b65598 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-06-02 18:04:45.583593 | orchestrator | 82cafb5f8ad6 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-02 18:04:45.583601 | orchestrator | ca1456e2aa15 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-06-02 18:04:45.583631 | orchestrator | e21627f1f939 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-06-02 18:04:45.583650 | orchestrator | 0ee95e19b9d8 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-02 18:04:45.583663 | orchestrator | b73280c3bf64 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-02 18:04:45.583675 | orchestrator | 4291ea4a7bc3 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-02 18:04:45.583687 | orchestrator | 085492b16dad registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) neutron_server
2025-06-02 18:04:45.583698 | orchestrator | 02c80fa39ff1 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-06-02 18:04:45.583710 | orchestrator | 7cc47b29bd90 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-06-02 18:04:45.583722 | orchestrator | 308e4a3603f8 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-06-02 18:04:45.583734 | orchestrator | ae0d17ca3deb registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-02 18:04:45.583746 | orchestrator | 0e2eb0d17b4b registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-06-02 18:04:45.583757 | orchestrator | e48e4b02fd85 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-06-02 18:04:45.583770 | orchestrator | 0e5768f40a86 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-02 18:04:45.583803 | orchestrator | b3af7517b64c registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-02 18:04:45.583812 | orchestrator | c22d1bb075d2 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-02 18:04:45.583820 | orchestrator | 283452d65de4 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-06-02 18:04:45.583827 | orchestrator | aec9fea4cdef registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-02 18:04:45.583834 | orchestrator | 2cd889b98e58 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-06-02 18:04:45.583849 | orchestrator | b8f0866569f2 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-02 18:04:45.583857 | orchestrator | 9e3c2d829be7 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-06-02 18:04:45.583872 | orchestrator | 84a992cfe42a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-06-02 18:04:45.583880 | orchestrator | 29a261d38c48 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-06-02 18:04:45.583887 | orchestrator | e6364aa3b124 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-02 18:04:45.583930 | orchestrator | e233fbd29f71 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-06-02 18:04:45.583938 | orchestrator | 12b56c7930ac registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-02 18:04:45.583946 | orchestrator | cca64538a3d2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2025-06-02 18:04:45.583953 | orchestrator | 4773f6864823 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-06-02 18:04:45.583960 | orchestrator | 04e4cd4107d6 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-06-02 18:04:45.583968 | orchestrator | a45005d5eff9 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-06-02 18:04:45.583975 | orchestrator | dcb5869938e7 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-06-02 18:04:45.583983 | orchestrator | 175a59d03e2f registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2025-06-02 18:04:45.583990 | orchestrator | dbc7e19df321 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-06-02 18:04:45.583997 | orchestrator | 22d1a697c900 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-06-02 18:04:45.584004 | orchestrator | 8961ce690c9c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2025-06-02 18:04:45.584019 | orchestrator | a67771c30725 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-06-02 18:04:45.584027 | orchestrator | 06022ef93366 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-06-02 18:04:45.584034 | orchestrator | c0c14f8b6630 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-06-02 18:04:45.584041 | orchestrator | f002c65b790a registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-06-02 18:04:45.584054 | orchestrator | 63fc19620c8e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-06-02 18:04:45.584062 | orchestrator | f52894053d04 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-06-02 18:04:45.584070 | orchestrator | 14ac77ff05cd registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-06-02 18:04:45.584077 | orchestrator | c0b1b9c73486 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0
2025-06-02 18:04:45.584088 | orchestrator | c2e7d7cf6004 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-06-02 18:04:45.584096 | orchestrator | d9b9143f079d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-06-02 18:04:45.584103 | orchestrator | ee4cc55039d4 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-06-02 18:04:45.584111 | orchestrator | 854829b83a27 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-06-02 18:04:45.584118 | orchestrator | a45bc614df60 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-06-02 18:04:45.584125 | orchestrator | f5811bef5979 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-06-02 18:04:45.584132 | orchestrator | 6a851768e130 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-06-02 18:04:45.584140 | orchestrator | 5a280bcb6067 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-06-02 18:04:45.584147 | orchestrator | e1e7a7af50ba registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-06-02 18:04:45.850586 | orchestrator |
2025-06-02 18:04:45.850693 | orchestrator | ## Images @ testbed-node-0
2025-06-02 18:04:45.850709 | orchestrator |
2025-06-02 18:04:45.850719 | orchestrator | + echo
2025-06-02 18:04:45.850730 | orchestrator | + echo '## Images @ testbed-node-0'
2025-06-02 18:04:45.850741 | orchestrator | + echo
2025-06-02 18:04:45.850751 | orchestrator | + osism container testbed-node-0 images
2025-06-02 18:04:47.993958 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 18:04:47.994174 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB
2025-06-02 18:04:47.994192 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 18:04:47.994209 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB
2025-06-02 18:04:47.994233 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB
2025-06-02 18:04:47.994295 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB
2025-06-02 18:04:47.994314 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB
2025-06-02 18:04:47.994330 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 18:04:47.994360 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB
2025-06-02 18:04:47.994378 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB
2025-06-02 18:04:47.994395 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 18:04:47.994413 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB
2025-06-02 18:04:47.994433 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB
2025-06-02 18:04:47.994455 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB
2025-06-02 18:04:47.994474 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB
2025-06-02 18:04:47.994493 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 18:04:47.994509 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB
2025-06-02 18:04:47.994523 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 18:04:47.994537 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB
2025-06-02 18:04:47.994550 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB
2025-06-02 18:04:47.994562 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB
2025-06-02 18:04:47.994575 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB
2025-06-02 18:04:47.994587 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB
2025-06-02 18:04:47.994600 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB
2025-06-02 18:04:47.994613 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB
2025-06-02 18:04:47.994626 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB
2025-06-02 18:04:47.994639 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 2 days ago 1.04GB
2025-06-02 18:04:47.994651 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 2 days ago 1.04GB
2025-06-02 18:04:47.994667 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 2 days ago 1.04GB
2025-06-02 18:04:47.994686 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 2 days ago 1.04GB
2025-06-02 18:04:47.994704 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB
2025-06-02 18:04:47.994722 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB
2025-06-02 18:04:47.994785 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB
2025-06-02 18:04:47.994811 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB
2025-06-02 18:04:47.994830 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB
2025-06-02 18:04:47.994956 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB
2025-06-02 18:04:47.994969 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB
2025-06-02 18:04:47.994979 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB
2025-06-02 18:04:47.994990 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB
2025-06-02 18:04:47.995001 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB
2025-06-02 18:04:47.995011 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB
2025-06-02 18:04:47.995022 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB
2025-06-02 18:04:47.995033 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB
2025-06-02 18:04:47.995044 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB
2025-06-02 18:04:47.995055 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB
2025-06-02 18:04:47.995066 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 2 days ago 1.04GB
2025-06-02 18:04:47.995076 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 2 days ago 1.04GB
2025-06-02 18:04:47.995087 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB
2025-06-02 18:04:47.995098 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB
2025-06-02 18:04:47.995108 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB
2025-06-02 18:04:47.995119 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB
2025-06-02 18:04:47.995130 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB
2025-06-02 18:04:47.995141 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB
2025-06-02 18:04:47.995151 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB
2025-06-02 18:04:47.995162 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB
2025-06-02 18:04:47.995173 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB
2025-06-02 18:04:47.995184 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB
2025-06-02 18:04:47.995194 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 2 days ago 1.11GB
2025-06-02 18:04:47.995215 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 2 days ago 1.12GB
2025-06-02 18:04:47.995227 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB
2025-06-02 18:04:47.995237 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB
2025-06-02 18:04:47.995254 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB
2025-06-02 18:04:47.995266 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB
2025-06-02 18:04:47.995277 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB
2025-06-02 18:04:48.256143 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 18:04:48.273615 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 18:04:48.317043 | orchestrator |
2025-06-02 18:04:48.317148 | orchestrator | ## Containers @ testbed-node-1
2025-06-02 18:04:48.317164 | orchestrator |
2025-06-02 18:04:48.317177 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 18:04:48.317189 | orchestrator | + echo
2025-06-02 18:04:48.317201 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-02 18:04:48.317214 | orchestrator | + echo
2025-06-02 18:04:48.317225 | orchestrator | + osism container testbed-node-1 ps
2025-06-02 18:04:50.544835 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 18:04:50.544947 | orchestrator | c72dc527b211 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 18:04:50.544956 | orchestrator | 310acc0b1780 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 18:04:50.544961 | orchestrator | a1bbc970d542 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-02 18:04:50.544965 | orchestrator | 4054bbeede54 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-02 18:04:50.544969 | orchestrator | 34de56a05fa9 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-02 18:04:50.544973 | orchestrator | 9bb686068265 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-06-02 18:04:50.544977 | orchestrator | 0d2096e346a5 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-02 18:04:50.544981 | orchestrator | 6ef717b54efb registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-06-02 18:04:50.544985 | orchestrator | 1106c13f1cfe registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-02 18:04:50.544988 | orchestrator | d27bfa89fde6 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-06-02 18:04:50.544992 | orchestrator | 4eeba9562680 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-02 18:04:50.545009 | orchestrator | adfd49340ecb registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-02 18:04:50.545014 | orchestrator | 6c2e45640a55 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-06-02 18:04:50.545017 | orchestrator | fafc0961d572 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-06-02 18:04:50.545021 | orchestrator | 65fd4f19c776 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-06-02 18:04:50.545025 | orchestrator | 73855aeff857 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-02 18:04:50.545035 | orchestrator | 2ce4b48841a5 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-06-02 18:04:50.545043 | orchestrator | 47373331fdad registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-06-02 18:04:50.545047 | orchestrator | 5ed07e72a064 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-02 18:04:50.545061 | orchestrator | 2d6ee69151d7 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-02 18:04:50.545065 | orchestrator | ff4aa40c9c87 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-02 18:04:50.545069 | orchestrator | c7c52b1d00ba registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-06-02 18:04:50.545073 | orchestrator | a7596ba0b968 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-02 18:04:50.545077 | orchestrator | 83fce2a536e1 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-02 18:04:50.545081 | orchestrator | 70becdd46857 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-06-02 18:04:50.545086 | orchestrator | 12400d471c4d registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-06-02 18:04:50.545090 | orchestrator | bb2e8a97bcc2 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-06-02 18:04:50.545094 | orchestrator | 7b80d7dd6800 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-06-02 18:04:50.545098 | orchestrator | 67e4a247f2f2 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-06-02 18:04:50.545106 | orchestrator | e79045368478 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-02 18:04:50.545110 | orchestrator | 580da6a7486e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-02 18:04:50.545113 | orchestrator | 4dfba122fea9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-06-02 18:04:50.545117 | orchestrator | 065481e67731 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-06-02 18:04:50.545121 | orchestrator | bf8db4401d75 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-06-02 18:04:50.545125 | orchestrator | ed38775fa95b registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-06-02 18:04:50.545129 | orchestrator | 61e01a76ee0b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-06-02 18:04:50.545132 | orchestrator | 3e2d2df7235a registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-06-02 18:04:50.545136 | orchestrator | a86452587b59 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-06-02 18:04:50.545140 | orchestrator | 9d6c3f001a84 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-06-02 18:04:50.545146 | orchestrator | 317f013b3b4e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-06-02 18:04:50.545154 | orchestrator | f41a130e2252 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-06-02 18:04:50.545158 | orchestrator | 2869159d103b registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-06-02 18:04:50.545162 | orchestrator | 525f7c009fd1 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-06-02 18:04:50.545166 | orchestrator | d0681cb3518a registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2025-06-02 18:04:50.545169 | orchestrator | 84ed5b0b544e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-06-02 18:04:50.545173 | orchestrator | 745c32793139 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-06-02 18:04:50.545177 | orchestrator | 699db0c0ea54 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-06-02 18:04:50.545184 | orchestrator | 3f19055f66a0 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-06-02 18:04:50.545188 | orchestrator | d2f68413b6c8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1
2025-06-02 18:04:50.545192 | orchestrator | e7b231a5c930 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-06-02 18:04:50.545196 | orchestrator | 4c59037509fb registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-06-02 18:04:50.545199 | orchestrator | e0ec0f51679d registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-06-02 18:04:50.545203 | orchestrator | a155aadebcb3 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-06-02 18:04:50.545207 | orchestrator | ed8f82f46363 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-06-02 18:04:50.545211 | orchestrator | bdb2194c9b55 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-06-02 18:04:50.545215 | orchestrator | 2235d7bd2c7f registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-06-02 18:04:50.545219 | orchestrator | e250ce3c0ca3 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-06-02 18:04:50.803581 | orchestrator |
2025-06-02 18:04:50.803685 | orchestrator | ## Images @ testbed-node-1
2025-06-02 18:04:50.803701 | orchestrator |
2025-06-02 18:04:50.803713 | orchestrator | + echo
2025-06-02 18:04:50.803725 | orchestrator | + echo '## Images @ testbed-node-1'
2025-06-02 18:04:50.803737 | orchestrator | + echo
2025-06-02 18:04:50.803748 | orchestrator | + osism container testbed-node-1 images
2025-06-02 18:04:52.999585 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 18:04:52.999668 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB
2025-06-02 18:04:52.999675 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 18:04:52.999681 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB
2025-06-02 18:04:52.999686 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB
2025-06-02 18:04:52.999692 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB
2025-06-02 18:04:52.999697 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB
2025-06-02 18:04:52.999702 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 18:04:52.999707 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB
2025-06-02 18:04:52.999712 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB
2025-06-02 18:04:52.999718 | orchestrator | registry.osism.tech/kolla/release/fluentd
5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 18:04:52.999740 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-02 18:04:52.999758 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-02 18:04:52.999764 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-02 18:04:52.999769 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-02 18:04:52.999774 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 18:04:52.999779 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-02 18:04:52.999784 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 18:04:52.999789 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-02 18:04:52.999794 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-02 18:04:52.999800 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-02 18:04:52.999805 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-02 18:04:52.999810 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-02 18:04:52.999815 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-02 18:04:52.999820 | orchestrator | registry.osism.tech/kolla/release/placement-api 
12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-02 18:04:52.999825 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-02 18:04:52.999830 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-02 18:04:52.999835 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-02 18:04:52.999841 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-02 18:04:52.999846 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-02 18:04:52.999851 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-02 18:04:52.999856 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-02 18:04:52.999872 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-02 18:04:52.999878 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-02 18:04:52.999883 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-02 18:04:52.999938 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-02 18:04:52.999944 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-02 18:04:52.999954 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 18:04:52.999959 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 
6d21806eb92e 2 days ago 1.05GB 2025-06-02 18:04:52.999964 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-02 18:04:52.999969 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-02 18:04:52.999974 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-02 18:04:52.999980 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-02 18:04:52.999985 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-02 18:04:52.999990 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-02 18:04:52.999996 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-02 18:04:53.000001 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-02 18:04:53.000006 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-02 18:04:53.000011 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-02 18:04:53.000017 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-02 18:04:53.000022 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-02 18:04:53.000027 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-02 18:04:53.000032 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 
2025-06-02 18:04:53.000037 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-02 18:04:53.000042 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-02 18:04:53.000047 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-02 18:04:53.323584 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 18:04:53.324080 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 18:04:53.388798 | orchestrator | 2025-06-02 18:04:53.388967 | orchestrator | ## Containers @ testbed-node-2 2025-06-02 18:04:53.388988 | orchestrator | 2025-06-02 18:04:53.389000 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 18:04:53.389012 | orchestrator | + echo 2025-06-02 18:04:53.389046 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-02 18:04:53.389059 | orchestrator | + echo 2025-06-02 18:04:53.389070 | orchestrator | + osism container testbed-node-2 ps 2025-06-02 18:04:55.549253 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 18:04:55.549347 | orchestrator | 24fca2ff086b registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-02 18:04:55.549358 | orchestrator | d69cbbbf5db5 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-02 18:04:55.549385 | orchestrator | f35c5194e21b registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-02 18:04:55.549393 | orchestrator | 8250558b4435 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 
18:04:55.549401 | orchestrator | d9209afb00c1 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-02 18:04:55.549410 | orchestrator | 947f92b70f23 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-02 18:04:55.549418 | orchestrator | 3e9aa2001bc4 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 18:04:55.549426 | orchestrator | 68cdc6743084 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-06-02 18:04:55.549434 | orchestrator | 8a0e174c940f registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-02 18:04:55.549442 | orchestrator | 2617e85fa1a9 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-06-02 18:04:55.549450 | orchestrator | ab392183f0ae registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 18:04:55.549458 | orchestrator | 23edf5ca0923 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-06-02 18:04:55.549466 | orchestrator | 551c48e8e2e3 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-06-02 18:04:55.549474 | orchestrator | 3416eb72bcad registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-06-02 18:04:55.549482 | orchestrator | a30f21d90087 
registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-06-02 18:04:55.549490 | orchestrator | bd043740a273 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-02 18:04:55.549498 | orchestrator | e39cbec569a5 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-02 18:04:55.549506 | orchestrator | 3052c4eb3711 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 18:04:55.549514 | orchestrator | 371046ded94c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-02 18:04:55.549539 | orchestrator | 1d5187d3af18 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-02 18:04:55.549553 | orchestrator | 84c09f10c629 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-06-02 18:04:55.549562 | orchestrator | f6dd88393c16 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-06-02 18:04:55.549570 | orchestrator | e4c96a9eff7c registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 18:04:55.549578 | orchestrator | f64a4f74b962 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-02 18:04:55.549586 | orchestrator | 
065cfd7a4f6c registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-02 18:04:55.549596 | orchestrator | bc55d569a690 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-06-02 18:04:55.549604 | orchestrator | be4801f48eca registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 18:04:55.549613 | orchestrator | 352ab356f4cc registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-02 18:04:55.549622 | orchestrator | c90e9b35bc6a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-02 18:04:55.549630 | orchestrator | c0442821f57a registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-02 18:04:55.549645 | orchestrator | d1db5e139a8e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-02 18:04:55.549654 | orchestrator | 03157bea98be registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-06-02 18:04:55.549662 | orchestrator | 3a6afe4acad7 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-06-02 18:04:55.549670 | orchestrator | 60e063c36721 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-06-02 
18:04:55.549679 | orchestrator | ba447f0eb318 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-06-02 18:04:55.549687 | orchestrator | 1b2ff2592d51 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-02 18:04:55.549696 | orchestrator | 6d28c8b5cd69 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-02 18:04:55.549708 | orchestrator | a641f73ac506 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-06-02 18:04:55.549725 | orchestrator | 341c2eeebb59 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-06-02 18:04:55.549734 | orchestrator | 2d45401d503a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-06-02 18:04:55.549752 | orchestrator | 289384a552f3 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-02 18:04:55.549760 | orchestrator | 8d5008054274 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-02 18:04:55.549768 | orchestrator | 7d724e1aaa38 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-02 18:04:55.549776 | orchestrator | 3f6c7bcb4b0b registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2025-06-02 18:04:55.549784 | orchestrator | db76112f7605 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2025-06-02 18:04:55.549792 | orchestrator | 72235f94163b registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-06-02 18:04:55.549801 | orchestrator | 4e856635b190 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-02 18:04:55.549811 | orchestrator | a8cc72394222 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-02 18:04:55.549820 | orchestrator | d97888002843 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-06-02 18:04:55.549829 | orchestrator | 1f56e86624b0 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-02 18:04:55.549838 | orchestrator | e36233516523 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-02 18:04:55.549847 | orchestrator | 73ce1eac345f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-02 18:04:55.549857 | orchestrator | 616357a27604 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-02 18:04:55.549866 | orchestrator | 91d0aae5023e registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-02 18:04:55.549876 | orchestrator | 91f6a83057c4 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago 
Up 29 minutes cron 2025-06-02 18:04:55.549925 | orchestrator | b572ae1d6556 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-02 18:04:55.549936 | orchestrator | 775f22b96ead registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-02 18:04:55.826446 | orchestrator | 2025-06-02 18:04:55.826518 | orchestrator | ## Images @ testbed-node-2 2025-06-02 18:04:55.826526 | orchestrator | 2025-06-02 18:04:55.826531 | orchestrator | + echo 2025-06-02 18:04:55.826536 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-02 18:04:55.826541 | orchestrator | + echo 2025-06-02 18:04:55.826545 | orchestrator | + osism container testbed-node-2 images 2025-06-02 18:04:57.931585 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 18:04:57.931697 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-02 18:04:57.931709 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-02 18:04:57.931719 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-02 18:04:57.931728 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB 2025-06-02 18:04:57.931737 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB 2025-06-02 18:04:57.931746 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-02 18:04:57.931755 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-02 18:04:57.931764 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-02 18:04:57.931773 | 
orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-02 18:04:57.931782 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 18:04:57.931790 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-02 18:04:57.931799 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-02 18:04:57.931807 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-02 18:04:57.931816 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 18:04:57.931842 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-02 18:04:57.931852 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-02 18:04:57.931861 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 18:04:57.931870 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-02 18:04:57.931878 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-02 18:04:57.931938 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-02 18:04:57.931948 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-02 18:04:57.931956 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-02 18:04:57.931965 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-02 18:04:57.931993 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-02 18:04:57.932002 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-02 18:04:57.932010 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-02 18:04:57.932019 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-02 18:04:57.932028 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-02 18:04:57.932036 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-02 18:04:57.932045 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-02 18:04:57.932053 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-02 18:04:57.932079 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-02 18:04:57.932089 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-02 18:04:57.932098 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-02 18:04:57.932108 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-02 18:04:57.932119 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-02 18:04:57.932129 | orchestrator | 
registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 18:04:57.932139 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-02 18:04:57.932154 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-02 18:04:57.932165 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-02 18:04:57.932174 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-02 18:04:57.932185 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-02 18:04:57.932195 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-02 18:04:57.932206 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-02 18:04:57.932216 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-02 18:04:57.932226 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-02 18:04:57.932236 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-02 18:04:57.932245 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-02 18:04:57.932254 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-02 18:04:57.932269 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-02 18:04:57.932278 | orchestrator | 
registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-02 18:04:57.932287 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-02 18:04:57.932295 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-02 18:04:57.932304 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-02 18:04:57.932313 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-02 18:04:58.197546 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-02 18:04:58.206184 | orchestrator | + set -e 2025-06-02 18:04:58.206290 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 18:04:58.207304 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 18:04:58.207345 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 18:04:58.207357 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 18:04:58.207368 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 18:04:58.207379 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 18:04:58.207392 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 18:04:58.207403 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 18:04:58.207414 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 18:04:58.207425 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 18:04:58.207436 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 18:04:58.207447 | orchestrator | ++ export ARA=false 2025-06-02 18:04:58.207458 | orchestrator | ++ ARA=false 2025-06-02 18:04:58.207504 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 18:04:58.207516 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 18:04:58.207527 | orchestrator | ++ export TEMPEST=false 2025-06-02 18:04:58.207537 | orchestrator | ++ TEMPEST=false 2025-06-02 18:04:58.207548 
| orchestrator | ++ export IS_ZUUL=true 2025-06-02 18:04:58.207559 | orchestrator | ++ IS_ZUUL=true 2025-06-02 18:04:58.207575 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2025-06-02 18:04:58.207587 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2025-06-02 18:04:58.207599 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 18:04:58.207610 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 18:04:58.207621 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 18:04:58.207631 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 18:04:58.207643 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 18:04:58.207654 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 18:04:58.207665 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 18:04:58.207744 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 18:04:58.207759 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 18:04:58.207771 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-02 18:04:58.216120 | orchestrator | + set -e 2025-06-02 18:04:58.216165 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 18:04:58.216177 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 18:04:58.216189 | orchestrator | ++ INTERACTIVE=false 2025-06-02 18:04:58.216200 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 18:04:58.216211 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 18:04:58.216222 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 18:04:58.217121 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 18:04:58.220960 | orchestrator | 2025-06-02 18:04:58.221039 | orchestrator | # Ceph status 2025-06-02 18:04:58.221055 | orchestrator | 2025-06-02 18:04:58.221067 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 18:04:58.221079 | orchestrator | ++ 
MANAGER_VERSION=9.1.0 2025-06-02 18:04:58.221091 | orchestrator | + echo 2025-06-02 18:04:58.221102 | orchestrator | + echo '# Ceph status' 2025-06-02 18:04:58.221113 | orchestrator | + echo 2025-06-02 18:04:58.221124 | orchestrator | + ceph -s 2025-06-02 18:04:58.867242 | orchestrator | cluster: 2025-06-02 18:04:58.867337 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-02 18:04:58.867350 | orchestrator | health: HEALTH_OK 2025-06-02 18:04:58.867360 | orchestrator | 2025-06-02 18:04:58.867368 | orchestrator | services: 2025-06-02 18:04:58.867377 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-06-02 18:04:58.867409 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-2, testbed-node-1 2025-06-02 18:04:58.867418 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-02 18:04:58.867426 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m) 2025-06-02 18:04:58.867434 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-02 18:04:58.867442 | orchestrator | 2025-06-02 18:04:58.867450 | orchestrator | data: 2025-06-02 18:04:58.867458 | orchestrator | volumes: 1/1 healthy 2025-06-02 18:04:58.867466 | orchestrator | pools: 14 pools, 401 pgs 2025-06-02 18:04:58.867475 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-02 18:04:58.867483 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-02 18:04:58.867491 | orchestrator | pgs: 401 active+clean 2025-06-02 18:04:58.867499 | orchestrator | 2025-06-02 18:04:58.910837 | orchestrator | 2025-06-02 18:04:58.910979 | orchestrator | # Ceph versions 2025-06-02 18:04:58.910990 | orchestrator | 2025-06-02 18:04:58.910998 | orchestrator | + echo 2025-06-02 18:04:58.911006 | orchestrator | + echo '# Ceph versions' 2025-06-02 18:04:58.911014 | orchestrator | + echo 2025-06-02 18:04:58.911022 | orchestrator | + ceph versions 2025-06-02 18:04:59.518210 | orchestrator | { 2025-06-02 18:04:59.518310 | 
orchestrator | "mon": { 2025-06-02 18:04:59.518324 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:04:59.518335 | orchestrator | }, 2025-06-02 18:04:59.518345 | orchestrator | "mgr": { 2025-06-02 18:04:59.518354 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:04:59.518362 | orchestrator | }, 2025-06-02 18:04:59.518371 | orchestrator | "osd": { 2025-06-02 18:04:59.518380 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-02 18:04:59.518389 | orchestrator | }, 2025-06-02 18:04:59.518398 | orchestrator | "mds": { 2025-06-02 18:04:59.518407 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:04:59.518416 | orchestrator | }, 2025-06-02 18:04:59.518425 | orchestrator | "rgw": { 2025-06-02 18:04:59.518434 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:04:59.518442 | orchestrator | }, 2025-06-02 18:04:59.518451 | orchestrator | "overall": { 2025-06-02 18:04:59.518461 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-02 18:04:59.518470 | orchestrator | } 2025-06-02 18:04:59.518479 | orchestrator | } 2025-06-02 18:04:59.574843 | orchestrator | 2025-06-02 18:04:59.575023 | orchestrator | # Ceph OSD tree 2025-06-02 18:04:59.575038 | orchestrator | 2025-06-02 18:04:59.575050 | orchestrator | + echo 2025-06-02 18:04:59.575061 | orchestrator | + echo '# Ceph OSD tree' 2025-06-02 18:04:59.575073 | orchestrator | + echo 2025-06-02 18:04:59.575084 | orchestrator | + ceph osd df tree 2025-06-02 18:05:00.099463 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-02 18:05:00.099560 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 
1.00 - root default 2025-06-02 18:05:00.099569 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-02 18:05:00.099575 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.57 1.11 201 up osd.0 2025-06-02 18:05:00.099581 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1003 MiB 1 KiB 74 MiB 19 GiB 5.26 0.89 189 up osd.5 2025-06-02 18:05:00.099589 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-02 18:05:00.099596 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.65 1.12 198 up osd.2 2025-06-02 18:05:00.099602 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 987 MiB 1 KiB 74 MiB 19 GiB 5.18 0.88 190 up osd.4 2025-06-02 18:05:00.099608 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-02 18:05:00.099614 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 973 MiB 899 MiB 1 KiB 74 MiB 19 GiB 4.76 0.80 176 up osd.1 2025-06-02 18:05:00.099639 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.08 1.20 216 up osd.3 2025-06-02 18:05:00.099646 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-02 18:05:00.099652 | orchestrator | MIN/MAX VAR: 0.80/1.20 STDDEV: 0.88 2025-06-02 18:05:00.140696 | orchestrator | 2025-06-02 18:05:00.140782 | orchestrator | # Ceph monitor status 2025-06-02 18:05:00.140792 | orchestrator | 2025-06-02 18:05:00.140799 | orchestrator | + echo 2025-06-02 18:05:00.140806 | orchestrator | + echo '# Ceph monitor status' 2025-06-02 18:05:00.140812 | orchestrator | + echo 2025-06-02 18:05:00.140817 | orchestrator | + ceph mon stat 2025-06-02 18:05:00.772110 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-02 18:05:00.847742 | orchestrator | 2025-06-02 18:05:00.847962 | orchestrator | # Ceph quorum status 2025-06-02 18:05:00.847989 | orchestrator | 2025-06-02 18:05:00.848002 | orchestrator | + echo 2025-06-02 18:05:00.848014 | orchestrator | + echo '# Ceph quorum status' 2025-06-02 18:05:00.848026 | orchestrator | + echo 2025-06-02 18:05:00.848113 | orchestrator | + ceph quorum_status 2025-06-02 18:05:00.848129 | orchestrator | + jq 2025-06-02 18:05:01.512397 | orchestrator | { 2025-06-02 18:05:01.512504 | orchestrator | "election_epoch": 6, 2025-06-02 18:05:01.512521 | orchestrator | "quorum": [ 2025-06-02 18:05:01.512533 | orchestrator | 0, 2025-06-02 18:05:01.512545 | orchestrator | 1, 2025-06-02 18:05:01.512556 | orchestrator | 2 2025-06-02 18:05:01.512567 | orchestrator | ], 2025-06-02 18:05:01.512661 | orchestrator | "quorum_names": [ 2025-06-02 18:05:01.512675 | orchestrator | "testbed-node-0", 2025-06-02 18:05:01.512686 | orchestrator | "testbed-node-1", 2025-06-02 18:05:01.512697 | orchestrator | "testbed-node-2" 2025-06-02 18:05:01.512708 | orchestrator | ], 2025-06-02 18:05:01.512720 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-02 18:05:01.512732 | orchestrator | "quorum_age": 1663, 2025-06-02 18:05:01.512743 | orchestrator | "features": { 2025-06-02 18:05:01.512754 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-02 18:05:01.512765 | orchestrator | "quorum_mon": [ 2025-06-02 18:05:01.512777 | orchestrator | "kraken", 2025-06-02 18:05:01.512788 | orchestrator | "luminous", 2025-06-02 18:05:01.512799 | orchestrator | "mimic", 2025-06-02 18:05:01.512810 | orchestrator | 
"osdmap-prune", 2025-06-02 18:05:01.512821 | orchestrator | "nautilus", 2025-06-02 18:05:01.512832 | orchestrator | "octopus", 2025-06-02 18:05:01.512843 | orchestrator | "pacific", 2025-06-02 18:05:01.512853 | orchestrator | "elector-pinging", 2025-06-02 18:05:01.512864 | orchestrator | "quincy", 2025-06-02 18:05:01.512875 | orchestrator | "reef" 2025-06-02 18:05:01.512954 | orchestrator | ] 2025-06-02 18:05:01.512969 | orchestrator | }, 2025-06-02 18:05:01.512982 | orchestrator | "monmap": { 2025-06-02 18:05:01.512995 | orchestrator | "epoch": 1, 2025-06-02 18:05:01.513010 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-02 18:05:01.513023 | orchestrator | "modified": "2025-06-02T17:36:55.091252Z", 2025-06-02 18:05:01.513036 | orchestrator | "created": "2025-06-02T17:36:55.091252Z", 2025-06-02 18:05:01.513049 | orchestrator | "min_mon_release": 18, 2025-06-02 18:05:01.513063 | orchestrator | "min_mon_release_name": "reef", 2025-06-02 18:05:01.513075 | orchestrator | "election_strategy": 1, 2025-06-02 18:05:01.513088 | orchestrator | "disallowed_leaders: ": "", 2025-06-02 18:05:01.513101 | orchestrator | "stretch_mode": false, 2025-06-02 18:05:01.513114 | orchestrator | "tiebreaker_mon": "", 2025-06-02 18:05:01.513127 | orchestrator | "removed_ranks: ": "", 2025-06-02 18:05:01.513140 | orchestrator | "features": { 2025-06-02 18:05:01.513154 | orchestrator | "persistent": [ 2025-06-02 18:05:01.513167 | orchestrator | "kraken", 2025-06-02 18:05:01.513179 | orchestrator | "luminous", 2025-06-02 18:05:01.513190 | orchestrator | "mimic", 2025-06-02 18:05:01.513200 | orchestrator | "osdmap-prune", 2025-06-02 18:05:01.513211 | orchestrator | "nautilus", 2025-06-02 18:05:01.513222 | orchestrator | "octopus", 2025-06-02 18:05:01.513233 | orchestrator | "pacific", 2025-06-02 18:05:01.513243 | orchestrator | "elector-pinging", 2025-06-02 18:05:01.513254 | orchestrator | "quincy", 2025-06-02 18:05:01.513301 | orchestrator | "reef" 2025-06-02 
18:05:01.513321 | orchestrator | ], 2025-06-02 18:05:01.513377 | orchestrator | "optional": [] 2025-06-02 18:05:01.513396 | orchestrator | }, 2025-06-02 18:05:01.513408 | orchestrator | "mons": [ 2025-06-02 18:05:01.513419 | orchestrator | { 2025-06-02 18:05:01.513430 | orchestrator | "rank": 0, 2025-06-02 18:05:01.513441 | orchestrator | "name": "testbed-node-0", 2025-06-02 18:05:01.513452 | orchestrator | "public_addrs": { 2025-06-02 18:05:01.513464 | orchestrator | "addrvec": [ 2025-06-02 18:05:01.513475 | orchestrator | { 2025-06-02 18:05:01.513486 | orchestrator | "type": "v2", 2025-06-02 18:05:01.513497 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-02 18:05:01.513508 | orchestrator | "nonce": 0 2025-06-02 18:05:01.513519 | orchestrator | }, 2025-06-02 18:05:01.513530 | orchestrator | { 2025-06-02 18:05:01.513540 | orchestrator | "type": "v1", 2025-06-02 18:05:01.513551 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-02 18:05:01.513562 | orchestrator | "nonce": 0 2025-06-02 18:05:01.513573 | orchestrator | } 2025-06-02 18:05:01.513584 | orchestrator | ] 2025-06-02 18:05:01.513595 | orchestrator | }, 2025-06-02 18:05:01.513606 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-02 18:05:01.513617 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-02 18:05:01.513628 | orchestrator | "priority": 0, 2025-06-02 18:05:01.513639 | orchestrator | "weight": 0, 2025-06-02 18:05:01.513650 | orchestrator | "crush_location": "{}" 2025-06-02 18:05:01.513661 | orchestrator | }, 2025-06-02 18:05:01.513672 | orchestrator | { 2025-06-02 18:05:01.513683 | orchestrator | "rank": 1, 2025-06-02 18:05:01.513694 | orchestrator | "name": "testbed-node-1", 2025-06-02 18:05:01.513705 | orchestrator | "public_addrs": { 2025-06-02 18:05:01.513716 | orchestrator | "addrvec": [ 2025-06-02 18:05:01.513727 | orchestrator | { 2025-06-02 18:05:01.513738 | orchestrator | "type": "v2", 2025-06-02 18:05:01.513749 | orchestrator | "addr": "192.168.16.11:3300", 
2025-06-02 18:05:01.513760 | orchestrator | "nonce": 0 2025-06-02 18:05:01.513771 | orchestrator | }, 2025-06-02 18:05:01.513782 | orchestrator | { 2025-06-02 18:05:01.513792 | orchestrator | "type": "v1", 2025-06-02 18:05:01.513803 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-02 18:05:01.513814 | orchestrator | "nonce": 0 2025-06-02 18:05:01.513825 | orchestrator | } 2025-06-02 18:05:01.513836 | orchestrator | ] 2025-06-02 18:05:01.513847 | orchestrator | }, 2025-06-02 18:05:01.513858 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-02 18:05:01.513868 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-02 18:05:01.513904 | orchestrator | "priority": 0, 2025-06-02 18:05:01.513916 | orchestrator | "weight": 0, 2025-06-02 18:05:01.513928 | orchestrator | "crush_location": "{}" 2025-06-02 18:05:01.514010 | orchestrator | }, 2025-06-02 18:05:01.514086 | orchestrator | { 2025-06-02 18:05:01.514098 | orchestrator | "rank": 2, 2025-06-02 18:05:01.514109 | orchestrator | "name": "testbed-node-2", 2025-06-02 18:05:01.514120 | orchestrator | "public_addrs": { 2025-06-02 18:05:01.514131 | orchestrator | "addrvec": [ 2025-06-02 18:05:01.514143 | orchestrator | { 2025-06-02 18:05:01.514153 | orchestrator | "type": "v2", 2025-06-02 18:05:01.514165 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-02 18:05:01.514176 | orchestrator | "nonce": 0 2025-06-02 18:05:01.514187 | orchestrator | }, 2025-06-02 18:05:01.514198 | orchestrator | { 2025-06-02 18:05:01.514209 | orchestrator | "type": "v1", 2025-06-02 18:05:01.514220 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-02 18:05:01.514231 | orchestrator | "nonce": 0 2025-06-02 18:05:01.514242 | orchestrator | } 2025-06-02 18:05:01.514253 | orchestrator | ] 2025-06-02 18:05:01.514264 | orchestrator | }, 2025-06-02 18:05:01.514275 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-02 18:05:01.514286 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-02 18:05:01.514297 | 
orchestrator | "priority": 0, 2025-06-02 18:05:01.514308 | orchestrator | "weight": 0, 2025-06-02 18:05:01.514325 | orchestrator | "crush_location": "{}" 2025-06-02 18:05:01.514344 | orchestrator | } 2025-06-02 18:05:01.514362 | orchestrator | ] 2025-06-02 18:05:01.514382 | orchestrator | } 2025-06-02 18:05:01.514402 | orchestrator | } 2025-06-02 18:05:01.514442 | orchestrator | 2025-06-02 18:05:01.514461 | orchestrator | # Ceph free space status 2025-06-02 18:05:01.514474 | orchestrator | 2025-06-02 18:05:01.514497 | orchestrator | + echo 2025-06-02 18:05:01.514509 | orchestrator | + echo '# Ceph free space status' 2025-06-02 18:05:01.514520 | orchestrator | + echo 2025-06-02 18:05:01.514531 | orchestrator | + ceph df 2025-06-02 18:05:02.121560 | orchestrator | --- RAW STORAGE --- 2025-06-02 18:05:02.121638 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-02 18:05:02.121653 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-02 18:05:02.121658 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-02 18:05:02.121662 | orchestrator | 2025-06-02 18:05:02.121667 | orchestrator | --- POOLS --- 2025-06-02 18:05:02.121672 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-02 18:05:02.121678 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-06-02 18:05:02.121683 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-02 18:05:02.121688 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-02 18:05:02.121692 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-02 18:05:02.121696 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-02 18:05:02.121700 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-02 18:05:02.121704 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-02 18:05:02.121709 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-02 18:05:02.121713 | orchestrator | 
.rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-06-02 18:05:02.121717 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 18:05:02.121721 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 18:05:02.121725 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2025-06-02 18:05:02.121729 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 18:05:02.121733 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 18:05:02.167284 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 18:05:02.222478 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 18:05:02.222600 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-02 18:05:02.222623 | orchestrator | + osism apply facts 2025-06-02 18:05:04.033136 | orchestrator | Registering Redlock._acquired_script 2025-06-02 18:05:04.033266 | orchestrator | Registering Redlock._extend_script 2025-06-02 18:05:04.033284 | orchestrator | Registering Redlock._release_script 2025-06-02 18:05:04.106738 | orchestrator | 2025-06-02 18:05:04 | INFO  | Task e8c204a2-fc72-4458-9dd4-44b4609adac3 (facts) was prepared for execution. 2025-06-02 18:05:04.106816 | orchestrator | 2025-06-02 18:05:04 | INFO  | It takes a moment until task e8c204a2-fc72-4458-9dd4-44b4609adac3 (facts) has been started and output is visible here. 
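The check script above gates the rest of the validation on `semver 9.1.0 5.0.0` returning a non-negative result (the `[[ 1 -eq -1 ]]` branch is not taken, so validation proceeds). A minimal sketch of such a three-way dotted-version comparison in Python — a hypothetical helper for illustration, not the actual `semver` command the script invokes:

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare two dotted version strings: -1 if a < b, 0 if equal, 1 if a > b."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Python compares integer lists lexicographically, which matches
    # major/minor/patch precedence for plain numeric versions.
    return (pa > pb) - (pa < pb)

print(semver_cmp("9.1.0", "5.0.0"))  # 1: manager 9.1.0 is newer than 5.0.0
```

Note this sketch only handles plain numeric `X.Y.Z` versions; real semver precedence also covers pre-release tags, which the manager versions in this log do not use.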
2025-06-02 18:05:08.318338 | orchestrator | 2025-06-02 18:05:08.318753 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 18:05:08.320193 | orchestrator | 2025-06-02 18:05:08.321003 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 18:05:08.323179 | orchestrator | Monday 02 June 2025 18:05:08 +0000 (0:00:00.303) 0:00:00.303 *********** 2025-06-02 18:05:09.857262 | orchestrator | ok: [testbed-manager] 2025-06-02 18:05:09.858186 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:09.864699 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:05:09.866495 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:05:09.866581 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:05:09.867458 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:05:09.867867 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:05:09.868512 | orchestrator | 2025-06-02 18:05:09.872503 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 18:05:09.872936 | orchestrator | Monday 02 June 2025 18:05:09 +0000 (0:00:01.538) 0:00:01.841 *********** 2025-06-02 18:05:10.039831 | orchestrator | skipping: [testbed-manager] 2025-06-02 18:05:10.130549 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:10.211605 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:05:10.287169 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:05:10.368244 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:05:11.157793 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:05:11.158347 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:05:11.160607 | orchestrator | 2025-06-02 18:05:11.162857 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 18:05:11.166192 | orchestrator | 2025-06-02 18:05:11.167294 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 18:05:11.168205 | orchestrator | Monday 02 June 2025 18:05:11 +0000 (0:00:01.305) 0:00:03.147 *********** 2025-06-02 18:05:16.388211 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:16.388972 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:05:16.389329 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:05:16.390938 | orchestrator | ok: [testbed-manager] 2025-06-02 18:05:16.391288 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:05:16.392292 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:05:16.393130 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:05:16.393859 | orchestrator | 2025-06-02 18:05:16.397573 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 18:05:16.398103 | orchestrator | 2025-06-02 18:05:16.398982 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 18:05:16.399382 | orchestrator | Monday 02 June 2025 18:05:16 +0000 (0:00:05.228) 0:00:08.376 *********** 2025-06-02 18:05:16.602400 | orchestrator | skipping: [testbed-manager] 2025-06-02 18:05:16.692475 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:16.768400 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:05:16.854100 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:05:16.937344 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:05:16.982227 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:05:16.982687 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:05:16.983678 | orchestrator | 2025-06-02 18:05:16.984308 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:05:16.984892 | orchestrator | 2025-06-02 18:05:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
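The PLAY RECAP that follows can be machine-checked rather than eyeballed. A small sketch (assuming the standard `ok=… changed=… unreachable=… failed=…` recap format shown in this log) that parses one recap line into counters, so a wrapper script could flag failed or unreachable hosts:

```python
import re

def parse_recap(line: str) -> dict:
    """Parse an Ansible recap line such as
    'testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 ...'
    into {'host': ..., 'ok': 2, 'changed': 0, ...}."""
    host, _, counters = line.partition(":")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return {"host": host.strip(), **stats}

line = ("testbed-node-0 : ok=2  changed=0 unreachable=0 "
        "failed=0 skipped=2  rescued=0 ignored=0")
recap = parse_recap(line)
assert recap["failed"] == 0 and recap["unreachable"] == 0
```

For the facts run below, every host reports `failed=0` and `unreachable=0`, so a gate like this would pass.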
2025-06-02 18:05:16.984940 | orchestrator | 2025-06-02 18:05:16 | INFO  | Please wait and do not abort execution. 2025-06-02 18:05:16.986802 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:05:16.988344 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:05:16.989361 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:05:16.989939 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:05:16.990794 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:05:16.991774 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:05:16.992280 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:05:16.992789 | orchestrator | 2025-06-02 18:05:16.993090 | orchestrator | 2025-06-02 18:05:16.993705 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:05:16.994046 | orchestrator | Monday 02 June 2025 18:05:16 +0000 (0:00:00.596) 0:00:08.972 *********** 2025-06-02 18:05:16.994350 | orchestrator | =============================================================================== 2025-06-02 18:05:16.994800 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.23s 2025-06-02 18:05:16.995303 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.54s 2025-06-02 18:05:16.997012 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s 2025-06-02 18:05:16.997264 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2025-06-02 
18:05:17.761341 | orchestrator | + osism validate ceph-mons 2025-06-02 18:05:19.510630 | orchestrator | Registering Redlock._acquired_script 2025-06-02 18:05:19.510757 | orchestrator | Registering Redlock._extend_script 2025-06-02 18:05:19.510773 | orchestrator | Registering Redlock._release_script 2025-06-02 18:05:39.869750 | orchestrator | 2025-06-02 18:05:39.869969 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-02 18:05:39.869994 | orchestrator | 2025-06-02 18:05:39.870010 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 18:05:39.870092 | orchestrator | Monday 02 June 2025 18:05:24 +0000 (0:00:00.435) 0:00:00.435 *********** 2025-06-02 18:05:39.870109 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:05:39.870123 | orchestrator | 2025-06-02 18:05:39.870138 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 18:05:39.870152 | orchestrator | Monday 02 June 2025 18:05:24 +0000 (0:00:00.673) 0:00:01.108 *********** 2025-06-02 18:05:39.870185 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:05:39.870202 | orchestrator | 2025-06-02 18:05:39.870220 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-02 18:05:39.870237 | orchestrator | Monday 02 June 2025 18:05:25 +0000 (0:00:00.898) 0:00:02.007 *********** 2025-06-02 18:05:39.870256 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:39.870276 | orchestrator | 2025-06-02 18:05:39.870295 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-02 18:05:39.870313 | orchestrator | Monday 02 June 2025 18:05:25 +0000 (0:00:00.264) 0:00:02.271 *********** 2025-06-02 18:05:39.870331 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:39.870350 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 18:05:39.870363 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:05:39.870374 | orchestrator | 2025-06-02 18:05:39.870385 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-02 18:05:39.870396 | orchestrator | Monday 02 June 2025 18:05:26 +0000 (0:00:00.311) 0:00:02.583 *********** 2025-06-02 18:05:39.870407 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:39.870418 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:05:39.870429 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:05:39.870440 | orchestrator | 2025-06-02 18:05:39.870451 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-02 18:05:39.870463 | orchestrator | Monday 02 June 2025 18:05:27 +0000 (0:00:00.971) 0:00:03.554 *********** 2025-06-02 18:05:39.870473 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:39.870485 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:05:39.870496 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:05:39.870507 | orchestrator | 2025-06-02 18:05:39.870518 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-02 18:05:39.870529 | orchestrator | Monday 02 June 2025 18:05:27 +0000 (0:00:00.278) 0:00:03.832 *********** 2025-06-02 18:05:39.870540 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:39.870551 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:05:39.870562 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:05:39.870573 | orchestrator | 2025-06-02 18:05:39.870584 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 18:05:39.870595 | orchestrator | Monday 02 June 2025 18:05:27 +0000 (0:00:00.518) 0:00:04.351 *********** 2025-06-02 18:05:39.870606 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:39.870617 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:05:39.870627 | 
orchestrator | ok: [testbed-node-2] 2025-06-02 18:05:39.870663 | orchestrator | 2025-06-02 18:05:39.870676 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-02 18:05:39.870687 | orchestrator | Monday 02 June 2025 18:05:28 +0000 (0:00:00.337) 0:00:04.689 *********** 2025-06-02 18:05:39.870698 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:39.870709 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:05:39.870719 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:05:39.870730 | orchestrator | 2025-06-02 18:05:39.870741 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-02 18:05:39.870752 | orchestrator | Monday 02 June 2025 18:05:28 +0000 (0:00:00.312) 0:00:05.001 *********** 2025-06-02 18:05:39.870763 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:05:39.870773 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:05:39.870784 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:05:39.870795 | orchestrator | 2025-06-02 18:05:39.870806 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 18:05:39.870817 | orchestrator | Monday 02 June 2025 18:05:28 +0000 (0:00:00.373) 0:00:05.375 *********** 2025-06-02 18:05:39.870827 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:39.870838 | orchestrator | 2025-06-02 18:05:39.870917 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 18:05:39.870929 | orchestrator | Monday 02 June 2025 18:05:29 +0000 (0:00:00.721) 0:00:06.097 *********** 2025-06-02 18:05:39.870940 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:39.870951 | orchestrator | 2025-06-02 18:05:39.870961 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 18:05:39.870975 | orchestrator | Monday 02 June 2025 18:05:29 +0000 (0:00:00.276) 
0:00:06.373 *********** 2025-06-02 18:05:39.870994 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:39.871019 | orchestrator | 2025-06-02 18:05:39.871228 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:05:39.871241 | orchestrator | Monday 02 June 2025 18:05:30 +0000 (0:00:00.247) 0:00:06.621 *********** 2025-06-02 18:05:39.871252 | orchestrator | 2025-06-02 18:05:39.871263 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:05:39.871273 | orchestrator | Monday 02 June 2025 18:05:30 +0000 (0:00:00.068) 0:00:06.689 *********** 2025-06-02 18:05:39.871284 | orchestrator | 2025-06-02 18:05:39.871294 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:05:39.871306 | orchestrator | Monday 02 June 2025 18:05:30 +0000 (0:00:00.091) 0:00:06.781 *********** 2025-06-02 18:05:39.871316 | orchestrator | 2025-06-02 18:05:39.871327 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 18:05:39.871338 | orchestrator | Monday 02 June 2025 18:05:30 +0000 (0:00:00.080) 0:00:06.862 *********** 2025-06-02 18:05:39.871349 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:39.871359 | orchestrator | 2025-06-02 18:05:39.871370 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-02 18:05:39.871381 | orchestrator | Monday 02 June 2025 18:05:30 +0000 (0:00:00.240) 0:00:07.102 *********** 2025-06-02 18:05:39.871391 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:05:39.871402 | orchestrator | 2025-06-02 18:05:39.871438 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-02 18:05:39.871450 | orchestrator | Monday 02 June 2025 18:05:30 +0000 (0:00:00.262) 0:00:07.364 *********** 2025-06-02 18:05:39.871461 | orchestrator | 
ok: [testbed-node-0]

TASK [Get monmap info from one mon container] **********************************
Monday 02 June 2025 18:05:31 +0000 (0:00:00.127) 0:00:07.492 ***********
changed: [testbed-node-0]

TASK [Set quorum test data] ****************************************************
Monday 02 June 2025 18:05:32 +0000 (0:00:01.513) 0:00:09.005 ***********
ok: [testbed-node-0]

TASK [Fail quorum test if not all monitors are in quorum] **********************
Monday 02 June 2025 18:05:32 +0000 (0:00:00.313) 0:00:09.319 ***********
skipping: [testbed-node-0]

TASK [Pass quorum test if all monitors are in quorum] **************************
Monday 02 June 2025 18:05:33 +0000 (0:00:00.367) 0:00:09.687 ***********
ok: [testbed-node-0]

TASK [Set fsid test vars] ******************************************************
Monday 02 June 2025 18:05:33 +0000 (0:00:00.340) 0:00:10.027 ***********
ok: [testbed-node-0]

TASK [Fail Cluster FSID test if FSID does not match configuration] *************
Monday 02 June 2025 18:05:33 +0000 (0:00:00.322) 0:00:10.350 ***********
skipping: [testbed-node-0]

TASK [Pass Cluster FSID test if it matches configuration] **********************
Monday 02 June 2025 18:05:34 +0000 (0:00:00.129) 0:00:10.479 ***********
ok: [testbed-node-0]

TASK [Prepare status test vars] ************************************************
Monday 02 June 2025 18:05:34 +0000 (0:00:00.119) 0:00:10.599 ***********
ok: [testbed-node-0]

TASK [Gather status data] ******************************************************
Monday 02 June 2025 18:05:34 +0000 (0:00:00.136) 0:00:10.735 ***********
changed: [testbed-node-0]

TASK [Set health test data] ****************************************************
Monday 02 June 2025 18:05:35 +0000 (0:00:01.352) 0:00:12.088 ***********
ok: [testbed-node-0]

TASK [Fail cluster-health if health is not acceptable] *************************
Monday 02 June 2025 18:05:35 +0000 (0:00:00.298) 0:00:12.386 ***********
skipping: [testbed-node-0]

TASK [Pass cluster-health if health is acceptable] *****************************
Monday 02 June 2025 18:05:36 +0000 (0:00:00.180) 0:00:12.567 ***********
ok: [testbed-node-0]

TASK [Fail cluster-health if health is not acceptable (strict)] ****************
Monday 02 June 2025 18:05:36 +0000 (0:00:00.162) 0:00:12.729 ***********
skipping: [testbed-node-0]

TASK [Pass cluster-health if status is OK (strict)] ****************************
Monday 02 June 2025 18:05:36 +0000 (0:00:00.187) 0:00:12.916 ***********
skipping: [testbed-node-0]

TASK [Set validation result to passed if no test failed] ***********************
Monday 02 June 2025 18:05:36 +0000 (0:00:00.367) 0:00:13.283 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Set validation result to failed if a test failed] ************************
Monday 02 June 2025 18:05:37 +0000 (0:00:00.289) 0:00:13.573 ***********
skipping: [testbed-node-0]

TASK [Aggregate test results step one] *****************************************
Monday 02 June 2025 18:05:37 +0000 (0:00:00.247) 0:00:13.821 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step two] *****************************************
Monday 02 June 2025 18:05:39 +0000 (0:00:01.682) 0:00:15.504 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step three] ***************************************
Monday 02 June 2025 18:05:39 +0000 (0:00:00.289) 0:00:15.794 ***********
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:05:39 +0000 (0:00:00.247) 0:00:16.041 ***********

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:05:39 +0000 (0:00:00.074) 0:00:16.116 ***********

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:05:39 +0000 (0:00:00.091) 0:00:16.208 ***********

RUNNING HANDLER [Write report file] ********************************************
Monday 02 June 2025 18:05:39 +0000 (0:00:00.075) 0:00:16.283 ***********
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Print report file information] *******************************************
Monday 02 June 2025 18:05:41 +0000 (0:00:01.575) 0:00:17.858 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
    "msg": [
        "Validator run completed.",
        "You can find the report file here:",
        "/opt/reports/validator/ceph-mons-validator-2025-06-02T18:05:24+00:00-report.json",
        "on the following host:",
        "testbed-manager"
    ]
}

PLAY RECAP *********************************************************************
testbed-node-0 : ok=24 changed=5 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
testbed-node-1 : ok=5 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
testbed-node-2 : ok=5 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Monday 02 June 2025 18:05:42 +0000 (0:00:00.635) 0:00:18.494 ***********
===============================================================================
Aggregate test results step one ----------------------------------------- 1.68s
Write report file ------------------------------------------------------- 1.58s
Get monmap info from one mon container ---------------------------------- 1.51s
Gather status data ------------------------------------------------------ 1.35s
Get container info ------------------------------------------------------ 0.97s
Create report output directory ------------------------------------------ 0.90s
Aggregate test results step one ----------------------------------------- 0.72s
Get timestamp for report file ------------------------------------------- 0.67s
Print report file information ------------------------------------------- 0.64s
Set test result to passed if container is existing ---------------------- 0.52s
Set test result to passed if ceph-mon is running ------------------------ 0.37s
Fail quorum test if not all monitors are in quorum ---------------------- 0.37s
Pass cluster-health if status is OK (strict) ---------------------------- 0.37s
Pass quorum test if all monitors are in quorum -------------------------- 0.34s
Prepare test data ------------------------------------------------------- 0.34s
Set fsid test vars ------------------------------------------------------ 0.32s
Set quorum test data ---------------------------------------------------- 0.31s
Set test result to failed if ceph-mon is not running -------------------- 0.31s
Prepare test data for container existance test -------------------------- 0.31s
Set health test data ---------------------------------------------------- 0.30s
+ osism validate ceph-mgrs
Registering Redlock._acquired_script
Registering Redlock._extend_script
Registering Redlock._release_script

PLAY [Ceph validate mgrs] ******************************************************

TASK [Get timestamp for report file] *******************************************
Monday 02 June 2025 18:05:49 +0000 (0:00:00.458) 0:00:00.458 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Create report output directory] ******************************************
Monday 02 June 2025 18:05:49 +0000 (0:00:00.626) 0:00:01.084 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Define report vars] ******************************************************
Monday 02 June 2025 18:05:50 +0000 (0:00:00.855) 0:00:01.939 ***********
ok: [testbed-node-0]

TASK [Prepare test data for container existance test] **************************
Monday 02 June 2025 18:05:50 +0000 (0:00:00.262) 0:00:02.201 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Get container info] ******************************************************
Monday 02 June 2025 18:05:51 +0000 (0:00:00.320) 0:00:02.522 ***********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [Set test result to failed if container is missing] ***********************
Monday 02 June 2025 18:05:52 +0000 (0:00:01.019) 0:00:03.542 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Set test result to passed if container is existing] **********************
Monday 02 June 2025 18:05:52 +0000 (0:00:00.303) 0:00:03.845 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Prepare test data] *******************************************************
Monday 02 June 2025 18:05:53 +0000 (0:00:00.544) 0:00:04.390 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Set test result to failed if ceph-mgr is not running] ********************
Monday 02 June 2025 18:05:53 +0000 (0:00:00.353) 0:00:04.743 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Set test result to passed if ceph-mgr is running] ************************
Monday 02 June 2025 18:05:53 +0000 (0:00:00.286) 0:00:05.029 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Aggregate test results step one] *****************************************
Monday 02 June 2025 18:05:54 +0000 (0:00:00.322) 0:00:05.352 ***********
skipping: [testbed-node-0]

TASK [Aggregate test results step two] *****************************************
Monday 02 June 2025 18:05:54 +0000 (0:00:00.719) 0:00:06.071 ***********
skipping: [testbed-node-0]

TASK [Aggregate test results step three] ***************************************
Monday 02 June 2025 18:05:55 +0000 (0:00:00.278) 0:00:06.350 ***********
skipping: [testbed-node-0]

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:05:55 +0000 (0:00:00.296) 0:00:06.646 ***********

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:05:55 +0000 (0:00:00.088) 0:00:06.734 ***********

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:05:55 +0000 (0:00:00.084) 0:00:06.819 ***********

TASK [Print report file information] *******************************************
Monday 02 June 2025 18:05:55 +0000 (0:00:00.073) 0:00:06.893 ***********
skipping: [testbed-node-0]

TASK [Fail due to missing containers] ******************************************
Monday 02 June 2025 18:05:55 +0000 (0:00:00.329) 0:00:07.222 ***********
skipping: [testbed-node-0]

TASK [Define mgr module test vars] *********************************************
Monday 02 June 2025 18:05:56 +0000 (0:00:00.266) 0:00:07.489 ***********
ok: [testbed-node-0]

TASK [Gather list of mgr modules] **********************************************
Monday 02 June 2025 18:05:56 +0000 (0:00:00.121) 0:00:07.611 ***********
changed: [testbed-node-0]

TASK [Parse mgr module list from json] *****************************************
Monday 02 June 2025 18:05:58 +0000 (0:00:01.961) 0:00:09.572 ***********
ok: [testbed-node-0]

TASK [Extract list of enabled mgr modules] *************************************
Monday 02 June 2025 18:05:58 +0000 (0:00:00.260) 0:00:09.833 ***********
ok: [testbed-node-0]

TASK [Fail test if mgr modules are disabled that should be enabled] ************
Monday 02 June 2025 18:05:59 +0000 (0:00:00.812) 0:00:10.646 ***********
skipping: [testbed-node-0]

TASK [Pass test if required mgr modules are enabled] ***************************
Monday 02 June 2025 18:05:59 +0000 (0:00:00.150) 0:00:10.796 ***********
ok: [testbed-node-0]

TASK [Set validation result to passed if no test failed] ***********************
Monday 02 June 2025 18:05:59 +0000 (0:00:00.165) 0:00:10.962 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Set validation result to failed if a test failed] ************************
Monday 02 June 2025 18:05:59 +0000 (0:00:00.253) 0:00:11.215 ***********
skipping: [testbed-node-0]

TASK [Aggregate test results step one] *****************************************
Monday 02 June 2025 18:06:00 +0000 (0:00:00.257) 0:00:11.473 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step two] *****************************************
Monday 02 June 2025 18:06:01 +0000 (0:00:01.293) 0:00:12.766 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step three] ***************************************
Monday 02 June 2025 18:06:01 +0000 (0:00:00.263) 0:00:13.030 ***********
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:06:01 +0000 (0:00:00.252) 0:00:13.283 ***********

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:06:02 +0000 (0:00:00.070) 0:00:13.354 ***********

TASK [Flush handlers] **********************************************************
Monday 02 June 2025 18:06:02 +0000 (0:00:00.069) 0:00:13.424 ***********

RUNNING HANDLER [Write report file] ********************************************
Monday 02 June 2025 18:06:02 +0000 (0:00:00.071) 0:00:13.495 ***********
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Print report file information] *******************************************
Monday 02 June 2025 18:06:03 +0000 (0:00:01.782) 0:00:15.277 ***********
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
    "msg": [
        "Validator run completed.",
        "You can find the report file here:",
        "/opt/reports/validator/ceph-mgrs-validator-2025-06-02T18:05:49+00:00-report.json",
        "on the following host:",
        "testbed-manager"
    ]
}

PLAY RECAP *********************************************************************
testbed-node-0 : ok=19 changed=3 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0
testbed-node-1 : ok=5 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
testbed-node-2 : ok=5 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Monday 02 June 2025 18:06:04 +0000 (0:00:00.425) 0:00:15.703 ***********
===============================================================================
Gather list of mgr modules ---------------------------------------------- 1.96s
Write report file ------------------------------------------------------- 1.78s
Aggregate test results step one ----------------------------------------- 1.29s
Get container info ------------------------------------------------------ 1.02s
Create report output directory ------------------------------------------ 0.86s
Extract list of enabled mgr modules ------------------------------------- 0.81s
Aggregate test results step one ----------------------------------------- 0.72s
Get timestamp for report file ------------------------------------------- 0.63s
Set test result to passed if container is existing ---------------------- 0.54s
Print report file information ------------------------------------------- 0.43s
Prepare test data ------------------------------------------------------- 0.35s
Print report file information ------------------------------------------- 0.33s
Set test result to passed if ceph-mgr is running ------------------------ 0.32s
Prepare test data for container existance test -------------------------- 0.32s
Set test result to failed if container is missing ----------------------- 0.30s
Aggregate test results step three --------------------------------------- 0.30s
Set test result to failed if ceph-mgr is not running -------------------- 0.29s
Aggregate test results step two ----------------------------------------- 0.28s
Fail due to missing containers ------------------------------------------ 0.27s
Aggregate test results step two ----------------------------------------- 0.26s
+ osism validate ceph-osds
Registering Redlock._acquired_script
Registering Redlock._extend_script
Registering Redlock._release_script

PLAY [Ceph validate OSDs] ******************************************************

TASK [Get timestamp for report file] *******************************************
Monday 02 June 2025 18:06:11 +0000 (0:00:00.460) 0:00:00.460 ***********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get extra vars for Ceph configuration] ***********************************
Monday 02 June 2025 18:06:11 +0000 (0:00:00.692) 0:00:01.153 ***********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Create report output directory] ******************************************
Monday 02 June 2025 18:06:12 +0000 (0:00:00.456) 0:00:01.609 ***********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Define report vars] ******************************************************
Monday 02 June 2025 18:06:13 +0000 (0:00:00.976) 0:00:02.586 ***********
ok: [testbed-node-3]

TASK [Define OSD test variables] ***********************************************
Monday 02 June 2025 18:06:13 +0000 (0:00:00.150) 0:00:02.736 ***********
skipping: [testbed-node-3]

TASK [Calculate OSD devices for each host] *************************************
Monday 02 June 2025 18:06:13 +0000 (0:00:00.122) 0:00:02.858 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Define OSD test variables] ***********************************************
Monday 02 June 2025 18:06:13 +0000 (0:00:00.312) 0:00:03.170 ***********
ok: [testbed-node-3]

TASK [Calculate OSD devices for each host] *************************************
Monday 02 June 2025 18:06:14 +0000 (0:00:00.149) 0:00:03.320 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Calculate total number of OSDs in cluster] *******************************
Monday 02 June 2025 18:06:14 +0000 (0:00:00.326) 0:00:03.647 ***********
ok: [testbed-node-3]

TASK [Prepare test data] *******************************************************
Monday 02 June 2025 18:06:15 +0000 (0:00:00.618) 0:00:04.265 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Get list of ceph-osd containers on host] *********************************
Monday 02 June 2025 18:06:15 +0000 (0:00:00.582) 0:00:04.848 ***********
skipping: [testbed-node-3] => (item={'id': '2d1879ef17e0bceed2eb7f2a27af59d28f89d64084a149350d5dfb79ec5966de', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': '96895573b25806108859a9aafebf273d4245a1b2367ba087169bd03fe8517866', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': '602f9091e044b617aaf63d7df4383b827c1c775d6ae709a69f811f7117e55ba7', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': 'b72474d783d14e9d2525b8a04d0cb063eaee91b15e6a66c52221af45d37831e7', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': '9145ab670106a426dfb45c3869e47955f32f6995aa326fc23bbfadb617ad8c53', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': 'b33c2a2013fdc1275b3519e4a1e67b76e767be5ea278b544eb9dc17b4c3ddc4b', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': '5d9a99660e5f58229f2d124f61e456a3bad2a2aabbbd2f7b2e86cabcb9c001a1', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
skipping: [testbed-node-3] => (item={'id': 'd3dd0ba0c4591d605fb3f297287451f53c308528b26a4de1b0c3492014e3bbe4', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
skipping: [testbed-node-3] => (item={'id': 'd6145fb1fcf7815da2ee95daec27e898e8c9aa9aac17d665fd18690d2fb51825', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
skipping: [testbed-node-3] => (item={'id': '4fd3e568504f264602e87b02678e72bb6e90d8151450070a4eae5cb9435d289c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
skipping: [testbed-node-3] => (item={'id': '48dc89e062c6498aea12715be5a18b62554c1f3875c70b53672f10819bcda9bf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
skipping: [testbed-node-3] => (item={'id': '1094f9206054afa1a56d269d7ad0fb31f65e57a1b3a4c45585efc498f4e3642d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
ok: [testbed-node-3] => (item={'id': '3db38ca426aa14559b8cc990de79a49f176da223e7e73422387b280fe4219fd6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'})
ok: [testbed-node-3] => (item={'id': '605e4af29d85b08de1e159027e315f0932f5be6ef20e6d645914ac0487e64583', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
skipping: [testbed-node-3] => (item={'id': '6c738177b2344186e7759cb55cfc12c8211377dc315e4c3cb1721c9e93559991', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
skipping: [testbed-node-3] => (item={'id': '771a621e6e2f7e43825fcd7eb524344ee13cb6acc566d08e56300a9237b7c134', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': '05b32a99502f81bb66928c3cbfbf92c882ae92dddec48d43d6d9625f027eb8b8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': 'd594d4eb092a44b259123a194842d9fca9bf6423c1fd4c021dbf3756d97c7c91', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
skipping: [testbed-node-3] => (item={'id': '0e690e52a13ebea13dc56651b83bb5220e0291f16bb959c45e52ca626a6bee40', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
skipping: [testbed-node-3] => (item={'id': 'd99af1e2ac541b9b422c1dab97c86e0d80e3895d436794a771b20a40042a6184', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
skipping: [testbed-node-4] => (item={'id': 'e2c42f1e7bb9aa3249a5c25d4b794e86916d086df713da7fd8d97d46e1584402', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
skipping: [testbed-node-4] => (item={'id': '957b0944bd5df9be3686e90498bf9c11944f0d437bb17f3a37847ccf7d921c33', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
skipping: [testbed-node-4] => (item={'id': '26012fc8c6991d5a30f3fc4e9cbb5ea9efcf3eb23ed781317e803a48f85fe10b', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name':
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 18:06:16.034411 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7bacd590997b28e7dc9380d8f8fa4730c4698deea9fd14188c60e82a6fbe81a8', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 18:06:16.034420 | orchestrator | skipping: [testbed-node-4] => (item={'id': '65548deceef6970e7db3f3f542d3f2c02a0747fd74b3cd9db974c493ffd04d8a', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 18:06:16.034427 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3d12fdb1ad3cbf63c34570de0e3a53d694955c99088ccb9fbda77ed13d6f002f', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 18:06:16.034433 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4dc309446127c5917d20e0e96d27768b4a6169d3314af49dee59568682e1b29f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 18:06:16.034457 | orchestrator | skipping: [testbed-node-4] => (item={'id': '57ba5d1a2dcbb00f79b87485bfaa17559b183cff2916f633103ac4dc9adfcd30', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 18:06:16.034464 | orchestrator | skipping: [testbed-node-4] => (item={'id': '929136ff5eaa1b59563ac54901c96d3d4e15b6343c2031a09e5d5c133f83cea5', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 
minutes'})  2025-06-02 18:06:16.034471 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a5a2a4b889adcaedbb399b6267a19d27a243dbb482332fd9d07d6a3366d16a7e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-02 18:06:16.034484 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9c2bf9dd8025335ea78f356c592e052aaf01006662de1b0346176ef34d19a130', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 18:06:16.034496 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5cf1e989f826cf51c79df1b6dbd5c8db3737cfbdadb54739de06e48765636ebf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-02 18:06:16.034508 | orchestrator | ok: [testbed-node-4] => (item={'id': '7aedd81a78b26121bca1081cac4b72f4a61638d37d2926646c88ba0a140482eb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-02 18:06:16.034521 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b626410b8ab95680907969cd594706c39457d8e04eb64ccc74fb6e98477068ce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-02 18:06:16.034554 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cb40d1f57322c911d5d78e00bc42a9e8587aeacaa678329c0ea143e5c6d9a6bd', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-06-02 18:06:16.034583 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7231f90116ac69ab564ca43fdc7111a96c940da95dcb0c762b9d9ce0c68b1526', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-06-02 18:06:16.034595 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f72d651429eb8ecd9f31df69f4fbd92d3872ad01c8bd31e09d0b0661f3b3f65e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-02 18:06:16.034602 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4feb325652c387e31ce14b153c396423797de29f431264d56042afb10fd010c8', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-02 18:06:16.034609 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8781370a7811efd4e2deb2d50cbd9d4d0a2f3cd62868ba6b405a6c0144400922', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-02 18:06:16.034616 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dabfde80c3f7f917a020a069e9de4490e21a0f7f01787b56cd8ce294f4c4c344', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-02 18:06:16.034622 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fc7e9b9139b9e73745de48b411a28cfc347a012fd20534ef8ef98be184383ec3', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 18:06:16.034628 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6aac9952e83c34c31223659978df27ede6475357bd134129a29844d99120f20c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  
2025-06-02 18:06:16.034635 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5daf79ccec24b10f1c604f70c8c11a41e49c0160d63df761794c9c377e1e80df', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 18:06:16.034641 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f7783293fcb788a3763b725cf840381e83d63366b803f674362c379c3e0a23e6', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 18:06:16.034658 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c464a0373bd5c9bf96fe76a40dfe874e401c939633fbedff402da4c001b09fa3', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 18:06:16.034669 | orchestrator | skipping: [testbed-node-5] => (item={'id': '95d0acf3f648cbd02b2946897f775a399e2a2a5525bc4fc57dca0863d8651754', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 18:06:16.034684 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a3ea2607b644991cca08b1fbfe5f100419e13c7a8c0f3d65f85418514749a0f2', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 18:06:16.034696 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1be49b0bb2381cede04b619daf31b8b1b555271650b289150aa941045afd1967', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 18:06:16.034703 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': '92297924e6c1724ff3d18e06625d6061f81de6ca0e4fa25fe1e2797a5f5ba024', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-02 18:06:16.034710 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ace2e0c3a3ebcaa4d390ac1503fb7dc0387626c219ce8b1eebec3536e5c06c5d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-02 18:06:16.034721 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b7249473aed1751bc3ac96d2ecf6981d3e9da2fda39001d752a17844a6d5bf77', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 18:06:24.903141 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8b8a42b31372b5c4e47c474bf84b5fe678e84b6acdf696f49eafd18e1bb3757f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-02 18:06:24.903237 | orchestrator | ok: [testbed-node-5] => (item={'id': '4ab4770c58f66c1af1e9341bd4ec399992c26590407fbedda2ec12a9569b8a59', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-02 18:06:24.903250 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fd4ffccffa79c37b002f21e1f5a931166019482b7d1d7787fc6a3edc2925c86c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-02 18:06:24.903260 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f47d2644497d6bbca6d7d446a090bdb663bfc55763d603c4c8b03e4fe2a0d59b', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up 27 minutes'})  2025-06-02 18:06:24.903269 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6bcf9692e352c092d6c8ed153b8507ab58d0478aa2369a7e7f1bca28572e8d00', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-06-02 18:06:24.903280 | orchestrator | skipping: [testbed-node-5] => (item={'id': '054376ac47cb89e0fa6e66a680bbe885d71aac223d1dfac0680304c90b2e207d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-02 18:06:24.903288 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b44f554adb4a4415be90ac53b4eb6b2e9e1d4cd806da3a299d1bc4ba46412572', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-02 18:06:24.903297 | orchestrator | skipping: [testbed-node-5] => (item={'id': '90eee9676e93afa6861fee51c1a4318220ab0b16c0a6bc08ba2738d4052697ee', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-02 18:06:24.903306 | orchestrator | skipping: [testbed-node-5] => (item={'id': '323453faac07050d62ca65742beb1326351851d249f904539cdf3ecdcc76af1a', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-02 18:06:24.903314 | orchestrator | 2025-06-02 18:06:24.903324 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-02 18:06:24.903356 | orchestrator | Monday 02 June 2025 18:06:16 +0000 (0:00:00.525) 0:00:05.373 *********** 2025-06-02 18:06:24.903365 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.903388 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 18:06:24.903396 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:24.903404 | orchestrator | 2025-06-02 18:06:24.903412 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-02 18:06:24.903420 | orchestrator | Monday 02 June 2025 18:06:16 +0000 (0:00:00.325) 0:00:05.699 *********** 2025-06-02 18:06:24.903428 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.903437 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:06:24.903444 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:06:24.903452 | orchestrator | 2025-06-02 18:06:24.903461 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-02 18:06:24.903469 | orchestrator | Monday 02 June 2025 18:06:16 +0000 (0:00:00.513) 0:00:06.213 *********** 2025-06-02 18:06:24.903477 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.903485 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:24.903492 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:24.903500 | orchestrator | 2025-06-02 18:06:24.903508 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 18:06:24.903516 | orchestrator | Monday 02 June 2025 18:06:17 +0000 (0:00:00.345) 0:00:06.559 *********** 2025-06-02 18:06:24.903524 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.903532 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:24.903539 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:24.903546 | orchestrator | 2025-06-02 18:06:24.903554 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-02 18:06:24.903561 | orchestrator | Monday 02 June 2025 18:06:17 +0000 (0:00:00.307) 0:00:06.866 *********** 2025-06-02 18:06:24.903569 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-02 18:06:24.903578 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-02 18:06:24.903586 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.903594 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-02 18:06:24.903602 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-02 18:06:24.903624 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:06:24.903632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-02 18:06:24.903640 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-02 18:06:24.903648 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:06:24.903656 | orchestrator | 2025-06-02 18:06:24.903665 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-02 18:06:24.903672 | orchestrator | Monday 02 June 2025 18:06:17 +0000 (0:00:00.343) 0:00:07.209 *********** 2025-06-02 18:06:24.903680 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.903688 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:24.903697 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:24.903705 | orchestrator | 2025-06-02 18:06:24.903713 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-02 18:06:24.903721 | orchestrator | Monday 02 June 2025 18:06:18 +0000 (0:00:00.549) 0:00:07.759 *********** 2025-06-02 18:06:24.903729 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.903737 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:06:24.903745 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:06:24.903753 | orchestrator | 2025-06-02 18:06:24.903762 | orchestrator | TASK [Set test result to failed if an OSD is 
not running] ********************** 2025-06-02 18:06:24.903771 | orchestrator | Monday 02 June 2025 18:06:18 +0000 (0:00:00.306) 0:00:08.066 *********** 2025-06-02 18:06:24.903780 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.903794 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:06:24.903830 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:06:24.903838 | orchestrator | 2025-06-02 18:06:24.903844 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-02 18:06:24.903849 | orchestrator | Monday 02 June 2025 18:06:19 +0000 (0:00:00.328) 0:00:08.394 *********** 2025-06-02 18:06:24.903855 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.903861 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:24.903866 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:24.903872 | orchestrator | 2025-06-02 18:06:24.903877 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 18:06:24.903883 | orchestrator | Monday 02 June 2025 18:06:19 +0000 (0:00:00.327) 0:00:08.722 *********** 2025-06-02 18:06:24.903892 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.903899 | orchestrator | 2025-06-02 18:06:24.903907 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 18:06:24.903915 | orchestrator | Monday 02 June 2025 18:06:20 +0000 (0:00:00.776) 0:00:09.498 *********** 2025-06-02 18:06:24.903922 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.903931 | orchestrator | 2025-06-02 18:06:24.903941 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 18:06:24.903949 | orchestrator | Monday 02 June 2025 18:06:20 +0000 (0:00:00.291) 0:00:09.789 *********** 2025-06-02 18:06:24.903957 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.903962 | orchestrator | 2025-06-02 18:06:24.903968 | 
orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:06:24.903974 | orchestrator | Monday 02 June 2025 18:06:20 +0000 (0:00:00.252) 0:00:10.042 *********** 2025-06-02 18:06:24.903979 | orchestrator | 2025-06-02 18:06:24.903985 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:06:24.903990 | orchestrator | Monday 02 June 2025 18:06:20 +0000 (0:00:00.069) 0:00:10.111 *********** 2025-06-02 18:06:24.903996 | orchestrator | 2025-06-02 18:06:24.904002 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:06:24.904007 | orchestrator | Monday 02 June 2025 18:06:20 +0000 (0:00:00.069) 0:00:10.181 *********** 2025-06-02 18:06:24.904013 | orchestrator | 2025-06-02 18:06:24.904018 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 18:06:24.904024 | orchestrator | Monday 02 June 2025 18:06:21 +0000 (0:00:00.070) 0:00:10.251 *********** 2025-06-02 18:06:24.904030 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.904036 | orchestrator | 2025-06-02 18:06:24.904041 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-02 18:06:24.904047 | orchestrator | Monday 02 June 2025 18:06:21 +0000 (0:00:00.257) 0:00:10.509 *********** 2025-06-02 18:06:24.904052 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:24.904057 | orchestrator | 2025-06-02 18:06:24.904062 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 18:06:24.904066 | orchestrator | Monday 02 June 2025 18:06:21 +0000 (0:00:00.340) 0:00:10.849 *********** 2025-06-02 18:06:24.904071 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.904076 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:24.904081 | orchestrator | ok: [testbed-node-5] 2025-06-02 
18:06:24.904085 | orchestrator | 2025-06-02 18:06:24.904090 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-02 18:06:24.904095 | orchestrator | Monday 02 June 2025 18:06:21 +0000 (0:00:00.344) 0:00:11.194 *********** 2025-06-02 18:06:24.904099 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.904104 | orchestrator | 2025-06-02 18:06:24.904109 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-02 18:06:24.904113 | orchestrator | Monday 02 June 2025 18:06:22 +0000 (0:00:00.757) 0:00:11.951 *********** 2025-06-02 18:06:24.904118 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 18:06:24.904123 | orchestrator | 2025-06-02 18:06:24.904128 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-02 18:06:24.904137 | orchestrator | Monday 02 June 2025 18:06:24 +0000 (0:00:01.596) 0:00:13.547 *********** 2025-06-02 18:06:24.904142 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.904147 | orchestrator | 2025-06-02 18:06:24.904152 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-02 18:06:24.904157 | orchestrator | Monday 02 June 2025 18:06:24 +0000 (0:00:00.119) 0:00:13.667 *********** 2025-06-02 18:06:24.904161 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:24.904194 | orchestrator | 2025-06-02 18:06:24.904200 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-02 18:06:24.904205 | orchestrator | Monday 02 June 2025 18:06:24 +0000 (0:00:00.357) 0:00:14.024 *********** 2025-06-02 18:06:24.904215 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:38.540968 | orchestrator | 2025-06-02 18:06:38.541075 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-02 18:06:38.541092 | orchestrator 
| Monday 02 June 2025 18:06:24 +0000 (0:00:00.123) 0:00:14.148 *********** 2025-06-02 18:06:38.541104 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541116 | orchestrator | 2025-06-02 18:06:38.541127 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 18:06:38.541139 | orchestrator | Monday 02 June 2025 18:06:25 +0000 (0:00:00.187) 0:00:14.335 *********** 2025-06-02 18:06:38.541150 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541161 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:38.541172 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.541183 | orchestrator | 2025-06-02 18:06:38.541194 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-02 18:06:38.541205 | orchestrator | Monday 02 June 2025 18:06:25 +0000 (0:00:00.325) 0:00:14.660 *********** 2025-06-02 18:06:38.541217 | orchestrator | changed: [testbed-node-3] 2025-06-02 18:06:38.541228 | orchestrator | changed: [testbed-node-4] 2025-06-02 18:06:38.541239 | orchestrator | changed: [testbed-node-5] 2025-06-02 18:06:38.541250 | orchestrator | 2025-06-02 18:06:38.541261 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-02 18:06:38.541272 | orchestrator | Monday 02 June 2025 18:06:28 +0000 (0:00:02.728) 0:00:17.389 *********** 2025-06-02 18:06:38.541283 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541294 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:38.541309 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.541327 | orchestrator | 2025-06-02 18:06:38.541346 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-02 18:06:38.541371 | orchestrator | Monday 02 June 2025 18:06:28 +0000 (0:00:00.338) 0:00:17.727 *********** 2025-06-02 18:06:38.541397 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541413 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 18:06:38.541431 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.541447 | orchestrator | 2025-06-02 18:06:38.541466 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-02 18:06:38.541484 | orchestrator | Monday 02 June 2025 18:06:28 +0000 (0:00:00.501) 0:00:18.229 *********** 2025-06-02 18:06:38.541502 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:38.541521 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:06:38.541541 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:06:38.541553 | orchestrator | 2025-06-02 18:06:38.541564 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-02 18:06:38.541575 | orchestrator | Monday 02 June 2025 18:06:29 +0000 (0:00:00.339) 0:00:18.568 *********** 2025-06-02 18:06:38.541586 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541597 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:38.541608 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.541618 | orchestrator | 2025-06-02 18:06:38.541629 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-02 18:06:38.541640 | orchestrator | Monday 02 June 2025 18:06:29 +0000 (0:00:00.611) 0:00:19.180 *********** 2025-06-02 18:06:38.541651 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:38.541689 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:06:38.541700 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:06:38.541711 | orchestrator | 2025-06-02 18:06:38.541722 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-02 18:06:38.541733 | orchestrator | Monday 02 June 2025 18:06:30 +0000 (0:00:00.390) 0:00:19.570 *********** 2025-06-02 18:06:38.541744 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:38.541754 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
18:06:38.541765 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:06:38.541776 | orchestrator | 2025-06-02 18:06:38.541817 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 18:06:38.541833 | orchestrator | Monday 02 June 2025 18:06:30 +0000 (0:00:00.341) 0:00:19.912 *********** 2025-06-02 18:06:38.541844 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541855 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:38.541881 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.541892 | orchestrator | 2025-06-02 18:06:38.541903 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-02 18:06:38.541913 | orchestrator | Monday 02 June 2025 18:06:31 +0000 (0:00:00.540) 0:00:20.453 *********** 2025-06-02 18:06:38.541924 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541935 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:38.541945 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.541956 | orchestrator | 2025-06-02 18:06:38.541967 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-02 18:06:38.541978 | orchestrator | Monday 02 June 2025 18:06:31 +0000 (0:00:00.760) 0:00:21.213 *********** 2025-06-02 18:06:38.541988 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.541999 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:38.542009 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.542130 | orchestrator | 2025-06-02 18:06:38.542157 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-02 18:06:38.542169 | orchestrator | Monday 02 June 2025 18:06:32 +0000 (0:00:00.313) 0:00:21.527 *********** 2025-06-02 18:06:38.542180 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:38.542191 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:06:38.542201 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 18:06:38.542212 | orchestrator | 2025-06-02 18:06:38.542223 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-02 18:06:38.542235 | orchestrator | Monday 02 June 2025 18:06:32 +0000 (0:00:00.298) 0:00:21.825 *********** 2025-06-02 18:06:38.542245 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:06:38.542256 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:06:38.542267 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:06:38.542277 | orchestrator | 2025-06-02 18:06:38.542288 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 18:06:38.542299 | orchestrator | Monday 02 June 2025 18:06:32 +0000 (0:00:00.337) 0:00:22.162 *********** 2025-06-02 18:06:38.542310 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 18:06:38.542323 | orchestrator | 2025-06-02 18:06:38.542342 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 18:06:38.542360 | orchestrator | Monday 02 June 2025 18:06:33 +0000 (0:00:00.807) 0:00:22.969 *********** 2025-06-02 18:06:38.542379 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:06:38.542397 | orchestrator | 2025-06-02 18:06:38.542433 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 18:06:38.542498 | orchestrator | Monday 02 June 2025 18:06:33 +0000 (0:00:00.259) 0:00:23.229 *********** 2025-06-02 18:06:38.542513 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 18:06:38.542531 | orchestrator | 2025-06-02 18:06:38.542550 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 18:06:38.542569 | orchestrator | Monday 02 June 2025 18:06:35 +0000 (0:00:01.788) 0:00:25.018 *********** 2025-06-02 18:06:38.542589 | orchestrator | ok: [testbed-node-3 -> 
testbed-manager(192.168.16.5)] 2025-06-02 18:06:38.542625 | orchestrator | 2025-06-02 18:06:38.542644 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 18:06:38.542662 | orchestrator | Monday 02 June 2025 18:06:36 +0000 (0:00:00.282) 0:00:25.300 *********** 2025-06-02 18:06:38.542681 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 18:06:38.542700 | orchestrator | 2025-06-02 18:06:38.542718 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:06:38.542753 | orchestrator | Monday 02 June 2025 18:06:36 +0000 (0:00:00.272) 0:00:25.573 *********** 2025-06-02 18:06:38.542764 | orchestrator | 2025-06-02 18:06:38.542775 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:06:38.542817 | orchestrator | Monday 02 June 2025 18:06:36 +0000 (0:00:00.070) 0:00:25.643 *********** 2025-06-02 18:06:38.542830 | orchestrator | 2025-06-02 18:06:38.542841 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:06:38.542853 | orchestrator | Monday 02 June 2025 18:06:36 +0000 (0:00:00.099) 0:00:25.742 *********** 2025-06-02 18:06:38.542864 | orchestrator | 2025-06-02 18:06:38.542875 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 18:06:38.542885 | orchestrator | Monday 02 June 2025 18:06:36 +0000 (0:00:00.075) 0:00:25.818 *********** 2025-06-02 18:06:38.542896 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 18:06:38.542907 | orchestrator | 2025-06-02 18:06:38.542918 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 18:06:38.542929 | orchestrator | Monday 02 June 2025 18:06:37 +0000 (0:00:01.332) 0:00:27.150 *********** 2025-06-02 18:06:38.542939 | orchestrator | 
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-02 18:06:38.542950 | orchestrator |  "msg": [ 2025-06-02 18:06:38.542962 | orchestrator |  "Validator run completed.", 2025-06-02 18:06:38.542973 | orchestrator |  "You can find the report file here:", 2025-06-02 18:06:38.542984 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-02T18:06:11+00:00-report.json", 2025-06-02 18:06:38.542997 | orchestrator |  "on the following host:", 2025-06-02 18:06:38.543008 | orchestrator |  "testbed-manager" 2025-06-02 18:06:38.543019 | orchestrator |  ] 2025-06-02 18:06:38.543031 | orchestrator | } 2025-06-02 18:06:38.543042 | orchestrator | 2025-06-02 18:06:38.543053 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:06:38.543065 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-02 18:06:38.543078 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 18:06:38.543089 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 18:06:38.543100 | orchestrator | 2025-06-02 18:06:38.543111 | orchestrator | 2025-06-02 18:06:38.543131 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:06:38.543142 | orchestrator | Monday 02 June 2025 18:06:38 +0000 (0:00:00.606) 0:00:27.756 *********** 2025-06-02 18:06:38.543153 | orchestrator | =============================================================================== 2025-06-02 18:06:38.543164 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.73s 2025-06-02 18:06:38.543175 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2025-06-02 18:06:38.543186 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 1.60s 2025-06-02 18:06:38.543197 | orchestrator | Write report file ------------------------------------------------------- 1.33s 2025-06-02 18:06:38.543208 | orchestrator | Create report output directory ------------------------------------------ 0.98s 2025-06-02 18:06:38.543219 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.81s 2025-06-02 18:06:38.543239 | orchestrator | Aggregate test results step one ----------------------------------------- 0.78s 2025-06-02 18:06:38.543250 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s 2025-06-02 18:06:38.543260 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.76s 2025-06-02 18:06:38.543272 | orchestrator | Get timestamp for report file ------------------------------------------- 0.69s 2025-06-02 18:06:38.543282 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.62s 2025-06-02 18:06:38.543293 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.61s 2025-06-02 18:06:38.543304 | orchestrator | Print report file information ------------------------------------------- 0.61s 2025-06-02 18:06:38.543321 | orchestrator | Prepare test data ------------------------------------------------------- 0.58s 2025-06-02 18:06:38.543338 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.55s 2025-06-02 18:06:38.543357 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2025-06-02 18:06:38.543388 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.53s 2025-06-02 18:06:38.854000 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.51s 2025-06-02 18:06:38.854160 | orchestrator | Get unencrypted and encrypted 
OSDs -------------------------------------- 0.50s 2025-06-02 18:06:38.854176 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.46s 2025-06-02 18:06:39.139686 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-02 18:06:39.149996 | orchestrator | + set -e 2025-06-02 18:06:39.150098 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 18:06:39.150106 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 18:06:39.150111 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 18:06:39.150116 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 18:06:39.150120 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 18:06:39.150125 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 18:06:39.150130 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 18:06:39.150134 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 18:06:39.150139 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 18:06:39.150143 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 18:06:39.150147 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 18:06:39.150151 | orchestrator | ++ export ARA=false 2025-06-02 18:06:39.150156 | orchestrator | ++ ARA=false 2025-06-02 18:06:39.150161 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 18:06:39.150165 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 18:06:39.150169 | orchestrator | ++ export TEMPEST=false 2025-06-02 18:06:39.150173 | orchestrator | ++ TEMPEST=false 2025-06-02 18:06:39.150178 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 18:06:39.150182 | orchestrator | ++ IS_ZUUL=true 2025-06-02 18:06:39.150186 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2025-06-02 18:06:39.150190 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.157 2025-06-02 18:06:39.150194 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 18:06:39.150198 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 18:06:39.150202 
| orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 18:06:39.150206 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 18:06:39.150210 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 18:06:39.150215 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 18:06:39.150219 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 18:06:39.150227 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 18:06:39.150234 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-02 18:06:39.150243 | orchestrator | + source /etc/os-release 2025-06-02 18:06:39.150251 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-02 18:06:39.150259 | orchestrator | ++ NAME=Ubuntu 2025-06-02 18:06:39.150265 | orchestrator | ++ VERSION_ID=24.04 2025-06-02 18:06:39.150272 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-02 18:06:39.150278 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-02 18:06:39.150286 | orchestrator | ++ ID=ubuntu 2025-06-02 18:06:39.150292 | orchestrator | ++ ID_LIKE=debian 2025-06-02 18:06:39.150298 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-02 18:06:39.150306 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-02 18:06:39.150312 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-02 18:06:39.150318 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-02 18:06:39.150348 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-02 18:06:39.150355 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-02 18:06:39.150362 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-02 18:06:39.150370 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-02 18:06:39.150377 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 18:06:39.184512 | orchestrator | + sudo apt-get install -y 
libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 18:07:04.414527 | orchestrator | 2025-06-02 18:07:04.414631 | orchestrator | # Status of Elasticsearch 2025-06-02 18:07:04.414648 | orchestrator | 2025-06-02 18:07:04.414661 | orchestrator | + pushd /opt/configuration/contrib 2025-06-02 18:07:04.414674 | orchestrator | + echo 2025-06-02 18:07:04.414686 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-02 18:07:04.414697 | orchestrator | + echo 2025-06-02 18:07:04.414709 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-02 18:07:04.659280 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-02 18:07:04.659369 | orchestrator | 2025-06-02 18:07:04.659381 | orchestrator | # Status of MariaDB 2025-06-02 18:07:04.659391 | orchestrator | 2025-06-02 18:07:04.659399 | orchestrator | + echo 2025-06-02 18:07:04.659424 | orchestrator | + echo '# Status of MariaDB' 2025-06-02 18:07:04.659432 | orchestrator | + echo 2025-06-02 18:07:04.659440 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-02 18:07:04.659458 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-02 18:07:04.738174 | orchestrator | Reading package lists... 2025-06-02 18:07:05.076549 | orchestrator | Building dependency tree... 2025-06-02 18:07:05.078444 | orchestrator | Reading state information... 2025-06-02 18:07:05.552396 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-02 18:07:05.552504 | orchestrator | bc set to manually installed. 
2025-06-02 18:07:05.552520 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-06-02 18:07:06.251462 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-02 18:07:06.251974 | orchestrator | 2025-06-02 18:07:06.252014 | orchestrator | + echo 2025-06-02 18:07:06.252696 | orchestrator | # Status of Prometheus 2025-06-02 18:07:06.252723 | orchestrator | 2025-06-02 18:07:06.252736 | orchestrator | + echo '# Status of Prometheus' 2025-06-02 18:07:06.252750 | orchestrator | + echo 2025-06-02 18:07:06.252800 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-02 18:07:06.309048 | orchestrator | Unauthorized 2025-06-02 18:07:06.311814 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-02 18:07:06.379023 | orchestrator | Unauthorized 2025-06-02 18:07:06.383172 | orchestrator | 2025-06-02 18:07:06.383266 | orchestrator | # Status of RabbitMQ 2025-06-02 18:07:06.383290 | orchestrator | 2025-06-02 18:07:06.383308 | orchestrator | + echo 2025-06-02 18:07:06.383320 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-02 18:07:06.383331 | orchestrator | + echo 2025-06-02 18:07:06.383343 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-02 18:07:06.865330 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-02 18:07:06.875311 | orchestrator | + echo 2025-06-02 18:07:06.875852 | orchestrator | 2025-06-02 18:07:06.875884 | orchestrator | # Status of Redis 2025-06-02 18:07:06.875898 | orchestrator | 2025-06-02 18:07:06.875910 | orchestrator | + echo '# Status of Redis' 2025-06-02 18:07:06.875923 | orchestrator | + echo 2025-06-02 18:07:06.875937 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e 
PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-02 18:07:06.883796 | orchestrator | TCP OK - 0.005 second response time on 192.168.16.10 port 6379|time=0.004861s;;;0.000000;10.000000 2025-06-02 18:07:06.883980 | orchestrator | + popd 2025-06-02 18:07:06.883992 | orchestrator | 2025-06-02 18:07:06.883999 | orchestrator | + echo 2025-06-02 18:07:06.884162 | orchestrator | # Create backup of MariaDB database 2025-06-02 18:07:06.884201 | orchestrator | 2025-06-02 18:07:06.884208 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-02 18:07:06.884213 | orchestrator | + echo 2025-06-02 18:07:06.884219 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-02 18:07:08.692553 | orchestrator | 2025-06-02 18:07:08 | INFO  | Task 0cc94836-53b6-45f5-8a46-cb45abf0d2c1 (mariadb_backup) was prepared for execution. 2025-06-02 18:07:08.692660 | orchestrator | 2025-06-02 18:07:08 | INFO  | It takes a moment until task 0cc94836-53b6-45f5-8a46-cb45abf0d2c1 (mariadb_backup) has been started and output is visible here. 
2025-06-02 18:07:12.754353 | orchestrator | 2025-06-02 18:07:12.760284 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 18:07:12.761800 | orchestrator | 2025-06-02 18:07:12.763531 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 18:07:12.764752 | orchestrator | Monday 02 June 2025 18:07:12 +0000 (0:00:00.197) 0:00:00.197 *********** 2025-06-02 18:07:12.953178 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:13.085729 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:07:13.086217 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:07:13.086244 | orchestrator | 2025-06-02 18:07:13.086521 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 18:07:13.087102 | orchestrator | Monday 02 June 2025 18:07:13 +0000 (0:00:00.337) 0:00:00.535 *********** 2025-06-02 18:07:13.667050 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 18:07:13.668220 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 18:07:13.668460 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 18:07:13.670499 | orchestrator | 2025-06-02 18:07:13.671354 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 18:07:13.672285 | orchestrator | 2025-06-02 18:07:13.672980 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 18:07:13.673425 | orchestrator | Monday 02 June 2025 18:07:13 +0000 (0:00:00.580) 0:00:01.115 *********** 2025-06-02 18:07:14.101799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 18:07:14.102628 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 18:07:14.104187 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 18:07:14.105384 | orchestrator | 
2025-06-02 18:07:14.106955 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 18:07:14.107807 | orchestrator | Monday 02 June 2025 18:07:14 +0000 (0:00:00.431) 0:00:01.547 *********** 2025-06-02 18:07:14.680947 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:07:14.681073 | orchestrator | 2025-06-02 18:07:14.681103 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-02 18:07:14.681390 | orchestrator | Monday 02 June 2025 18:07:14 +0000 (0:00:00.581) 0:00:02.128 *********** 2025-06-02 18:07:17.925538 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:17.926373 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:07:17.928090 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:07:17.928486 | orchestrator | 2025-06-02 18:07:17.928844 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-06-02 18:07:17.933618 | orchestrator | Monday 02 June 2025 18:07:17 +0000 (0:00:03.238) 0:00:05.367 *********** 2025-06-02 18:07:36.369704 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-02 18:07:36.369854 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-02 18:07:36.373705 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 18:07:36.374439 | orchestrator | mariadb_bootstrap_restart 2025-06-02 18:07:36.453414 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:07:36.455182 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:07:36.456592 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:07:36.458932 | orchestrator | 2025-06-02 18:07:36.459222 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 18:07:36.460420 | orchestrator | 
skipping: no hosts matched 2025-06-02 18:07:36.461150 | orchestrator | 2025-06-02 18:07:36.462388 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 18:07:36.462999 | orchestrator | skipping: no hosts matched 2025-06-02 18:07:36.463447 | orchestrator | 2025-06-02 18:07:36.464261 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 18:07:36.465341 | orchestrator | skipping: no hosts matched 2025-06-02 18:07:36.466079 | orchestrator | 2025-06-02 18:07:36.466991 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 18:07:36.467402 | orchestrator | 2025-06-02 18:07:36.468489 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 18:07:36.469080 | orchestrator | Monday 02 June 2025 18:07:36 +0000 (0:00:18.534) 0:00:23.902 *********** 2025-06-02 18:07:36.647554 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:36.773205 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:07:36.774109 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:07:36.777227 | orchestrator | 2025-06-02 18:07:36.777356 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 18:07:36.777382 | orchestrator | Monday 02 June 2025 18:07:36 +0000 (0:00:00.318) 0:00:24.220 *********** 2025-06-02 18:07:37.162862 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:37.208247 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:07:37.208349 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:07:37.209452 | orchestrator | 2025-06-02 18:07:37.210606 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:07:37.210861 | orchestrator | 2025-06-02 18:07:37 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-02 18:07:37.211799 | orchestrator | 2025-06-02 18:07:37 | INFO  | Please wait and do not abort execution. 2025-06-02 18:07:37.212654 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:07:37.213222 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 18:07:37.213877 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 18:07:37.214101 | orchestrator | 2025-06-02 18:07:37.214562 | orchestrator | 2025-06-02 18:07:37.214953 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:07:37.215437 | orchestrator | Monday 02 June 2025 18:07:37 +0000 (0:00:00.434) 0:00:24.655 *********** 2025-06-02 18:07:37.216009 | orchestrator | =============================================================================== 2025-06-02 18:07:37.216320 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.53s 2025-06-02 18:07:37.216976 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.24s 2025-06-02 18:07:37.217286 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2025-06-02 18:07:37.217945 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-06-02 18:07:37.218190 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s 2025-06-02 18:07:37.218682 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2025-06-02 18:07:37.219935 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-06-02 18:07:37.220826 | orchestrator | Include mariadb post-deploy.yml 
----------------------------------------- 0.32s 2025-06-02 18:07:37.865039 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-02 18:07:37.870303 | orchestrator | + set -e 2025-06-02 18:07:37.870507 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 18:07:37.870524 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 18:07:37.870558 | orchestrator | ++ INTERACTIVE=false 2025-06-02 18:07:37.870567 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 18:07:37.870577 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 18:07:37.870586 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 18:07:37.871792 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 18:07:37.880186 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 18:07:37.880273 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 18:07:37.880304 | orchestrator | 2025-06-02 18:07:37.880325 | orchestrator | # OpenStack endpoints 2025-06-02 18:07:37.880343 | orchestrator | + export OS_CLOUD=admin 2025-06-02 18:07:37.880362 | orchestrator | + OS_CLOUD=admin 2025-06-02 18:07:37.880381 | orchestrator | + echo 2025-06-02 18:07:37.880399 | orchestrator | 2025-06-02 18:07:37.880411 | orchestrator | + echo '# OpenStack endpoints' 2025-06-02 18:07:37.880422 | orchestrator | + echo 2025-06-02 18:07:37.880432 | orchestrator | + openstack endpoint list 2025-06-02 18:07:41.266240 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-02 18:07:41.266343 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-02 18:07:41.266352 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-02 18:07:41.266359 | orchestrator | | 1215b541a8074e70bd717abac201d572 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-02 18:07:41.266365 | orchestrator | | 154b571da6734af0b693857b61c3e678 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-02 18:07:41.266391 | orchestrator | | 22965ec9deb645ab92c8e4a16f64c01d | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-02 18:07:41.266398 | orchestrator | | 3491622221c4412daa3a80d4029df554 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-02 18:07:41.266404 | orchestrator | | 381c2ea402e74a1bb68bfcae5f422fe3 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-02 18:07:41.266411 | orchestrator | | 56b4de811c3540da9df4b34daffc9ee3 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-02 18:07:41.266417 | orchestrator | | 770a13dfa85e4a6cbbd219f6754e32c0 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-02 18:07:41.266423 | orchestrator | | 86ec9c2b05ba4fed9ab379bbbf1487f6 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-02 18:07:41.266430 | orchestrator | | 8aef44c8441a445eafa1eb67ea5c8a99 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-02 18:07:41.266436 | orchestrator | | 955e29d09fbe42ac851734dda7c2f6eb | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-02 18:07:41.266442 | orchestrator | | 
9c101bcf7e8a45528fec6b99a8afa49e | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-02 18:07:41.266448 | orchestrator | | abdde5fa37f7449193b044abe95b6dee | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-02 18:07:41.266455 | orchestrator | | aeb53b9b988a4aaf96f7df04c6257b05 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-02 18:07:41.266485 | orchestrator | | b3734ac1b3e4415c953de93d83121c77 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-02 18:07:41.266492 | orchestrator | | cd7db8b332624662a351c4794e9bd357 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-02 18:07:41.266498 | orchestrator | | cea102c068464f1a84432f9cbab1ec81 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-02 18:07:41.266504 | orchestrator | | dfc821fab0514d798aa1b2e6c8d51924 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-02 18:07:41.266510 | orchestrator | | eac9b13529b4495da1ad6782e247f709 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-02 18:07:41.266517 | orchestrator | | eb912dfe0a1f4699b6784df9315529b1 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-02 18:07:41.266523 | orchestrator | | f1ac0bfff09145d59581205574735dae | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-02 18:07:41.266546 | orchestrator | | f2a704f6c2ce47029c76f71f8ad9a839 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-02 18:07:41.266553 | orchestrator | | fafbe60209e440aca83e7382900f3e9a | RegionOne | octavia | load-balancer | 
True | public | https://api.testbed.osism.xyz:9876 |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+

# Cinder

+ echo
+ echo '# Cinder'
+ echo
+ openstack volume service list
+------------------+----------------------------+----------+---------+-------+----------------------------+
| Binary           | Host                       | Zone     | Status  | State | Updated At                 |
+------------------+----------------------------+----------+---------+-------+----------------------------+
| cinder-scheduler | testbed-node-0             | internal | enabled | up    | 2025-06-02T18:07:44.000000 |
| cinder-scheduler | testbed-node-2             | internal | enabled | up    | 2025-06-02T18:07:37.000000 |
| cinder-scheduler | testbed-node-1             | internal | enabled | up    | 2025-06-02T18:07:37.000000 |
| cinder-volume    | testbed-node-5@rbd-volumes | nova     | enabled | up    | 2025-06-02T18:07:34.000000 |
| cinder-volume    | testbed-node-3@rbd-volumes | nova     | enabled | up    | 2025-06-02T18:07:36.000000 |
| cinder-volume    | testbed-node-4@rbd-volumes | nova     | enabled | up    | 2025-06-02T18:07:38.000000 |
| cinder-backup    | testbed-node-3             | nova     | enabled | up    | 2025-06-02T18:07:39.000000 |
| cinder-backup    | testbed-node-5             | nova     | enabled | up    | 2025-06-02T18:07:40.000000 |
| cinder-backup    | testbed-node-4             | nova     | enabled | up    | 2025-06-02T18:07:41.000000 |
+------------------+----------------------------+----------+---------+-------+----------------------------+

# Neutron

+ echo
+ echo '# Neutron'
+ echo
+ openstack network agent list
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
| ID                                   | Agent Type                   | Host           | Availability Zone | Alive | State | Binary                     |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
| testbed-node-0                       | OVN Controller Gateway agent | testbed-node-0 | nova              | :-)   | UP    | ovn-controller             |
| testbed-node-5                       | OVN Controller agent         | testbed-node-5 |                   | :-)   | UP    | ovn-controller             |
| testbed-node-1                       | OVN Controller Gateway agent | testbed-node-1 | nova              | :-)   | UP    | ovn-controller             |
| testbed-node-4                       | OVN Controller agent         | testbed-node-4 |                   | :-)   | UP    | ovn-controller             |
| testbed-node-2                       | OVN Controller Gateway agent | testbed-node-2 | nova              | :-)   | UP    | ovn-controller             |
| testbed-node-3                       | OVN Controller agent         | testbed-node-3 |                   | :-)   | UP    | ovn-controller             |
| 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent           | testbed-node-4 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
| 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent           | testbed-node-5 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
| e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent           | testbed-node-3 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
+ openstack network service provider list
+---------------+------+---------+
| Service Type  | Name | Default |
+---------------+------+---------+
| L3_ROUTER_NAT | ovn  | True    |
+---------------+------+---------+

# Nova

+ echo
+ echo '# Nova'
+ echo
+ openstack compute service list
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
| ID                                   | Binary         | Host           | Zone     | Status  | State | Updated At                 |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
| 727a9cfe-27d2-400c-a69f-3c25612f36ae | nova-scheduler | testbed-node-2 | internal | enabled | up    | 2025-06-02T18:07:48.000000 |
| fd4cca29-e99c-4891-8482-a82e1a6cb942 | nova-scheduler | testbed-node-0 | internal | enabled | up    | 2025-06-02T18:07:48.000000 |
| 48bbe21e-b0d3-48ac-ae8b-4f2c9803ef06 | nova-scheduler | testbed-node-1 | internal | enabled | up    | 2025-06-02T18:07:51.000000 |
| 39a36fe6-4365-4f1b-8d1b-2670160d3f9c | nova-conductor | testbed-node-1 | internal | enabled | up    | 2025-06-02T18:07:49.000000 |
| aad6ff50-98ad-4605-8290-3a55c2c71531 | nova-conductor | testbed-node-2 | internal | enabled | up    | 2025-06-02T18:07:49.000000 |
| 9067f713-741f-42a8-a8c8-a2399e171315 | nova-conductor | testbed-node-0 | internal | enabled | up    | 2025-06-02T18:07:52.000000 |
| becaf58c-7bcd-410a-a3b5-67ce8724750b | nova-compute   | testbed-node-3 | nova     | enabled | up    | 2025-06-02T18:07:53.000000 |
| dfc821fa-0a5c-469f-8ca0-c8b0da1d3e47 | nova-compute   | testbed-node-4 | nova     | enabled | up    | 2025-06-02T18:07:43.000000 |
| 7ea2c1c0-091b-4fc3-97d1-e9bab86b30bc | nova-compute   | testbed-node-5 | nova     | enabled | up    | 2025-06-02T18:07:43.000000 |
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
+ openstack hypervisor list
+--------------------------------------+---------------------+-----------------+---------------+-------+
| ID                                   | Hypervisor Hostname | Hypervisor Type | Host IP       | State |
+--------------------------------------+---------------------+-----------------+---------------+-------+
| 9100edaa-6ff8-4da6-9faf-f309ca633e79 | testbed-node-3      | QEMU            | 192.168.16.13 | up    |
| 696c2175-031b-4294-9307-b31f839a12e8 | testbed-node-4      | QEMU            | 192.168.16.14 | up    |
| bf6c3e23-a6c6-4835-889b-965ec4302746 | testbed-node-5      | QEMU            | 192.168.16.15 | up    |
+--------------------------------------+---------------------+-----------------+---------------+-------+

# Run OpenStack test play

+ echo
+ echo '# Run OpenStack test play'
+ echo
+ osism apply --environment openstack test
2025-06-02 18:08:00 | INFO  | Trying to run play test in environment openstack
Registering Redlock._acquired_script
Registering Redlock._extend_script
Registering Redlock._release_script
2025-06-02 18:08:00 | INFO  | Task e3787238-0625-4078-a92d-e778ce507ada (test) was prepared for execution.
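The service checks above print tables for human inspection only; the job does not fail if a service is down. A minimal sketch of an automated gate, assuming the `openstack` CLI's machine-readable `-f value -c …` output (the `check_services` helper is hypothetical, not part of the testbed scripts):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Reads "binary host state" triples on stdin and fails if any state != up.
check_services() {
  local ok=0
  while read -r binary host state; do
    if [ "$state" != "up" ]; then
      echo "DOWN: $binary on $host" >&2
      ok=1
    fi
  done
  return $ok
}

# Real invocation would pipe CLI output in, e.g. (needs a reachable cloud):
#   openstack volume service list -f value -c Binary -c Host -c State | check_services
#   openstack compute service list -f value -c Binary -c Host -c State | check_services
```

The same helper works for any of the three service lists, since all expose Binary/Host/State columns.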
2025-06-02 18:08:00 | INFO  | It takes a moment until task e3787238-0625-4078-a92d-e778ce507ada (test) has been started and output is visible here.

PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Monday 02 June 2025 18:08:04 +0000 (0:00:00.077) 0:00:00.077 ***********
changed: [localhost]

TASK [Create test-admin user] **************************************************
Monday 02 June 2025 18:08:08 +0000 (0:00:03.724) 0:00:03.801 ***********
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Monday 02 June 2025 18:08:12 +0000 (0:00:04.212) 0:00:08.013 ***********
changed: [localhost]

TASK [Create test project] *****************************************************
Monday 02 June 2025 18:08:18 +0000 (0:00:06.463) 0:00:14.477 ***********
changed: [localhost]

TASK [Create test user] ********************************************************
Monday 02 June 2025 18:08:22 +0000 (0:00:04.068) 0:00:18.546 ***********
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Monday 02 June 2025 18:08:27 +0000 (0:00:04.266) 0:00:22.812 ***********
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Monday 02 June 2025 18:08:39 +0000 (0:00:12.395) 0:00:35.208 ***********
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Monday 02 June 2025 18:08:44 +0000 (0:00:05.044) 0:00:40.253 ***********
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Monday 02 June 2025 18:08:49 +0000 (0:00:05.205) 0:00:45.458 ***********
changed: [localhost]

TASK [Create icmp security group] **********************************************
Monday 02 June 2025 18:08:54 +0000 (0:00:04.359) 0:00:49.818 ***********
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Monday 02 June 2025 18:08:58 +0000 (0:00:04.102) 0:00:53.921 ***********
changed: [localhost]

TASK [Create test keypair] *****************************************************
Monday 02 June 2025 18:09:02 +0000 (0:00:04.247) 0:00:58.169 ***********
changed: [localhost]

TASK [Create test network topology] ********************************************
Monday 02 June 2025 18:09:06 +0000 (0:00:03.956) 0:01:02.125 ***********
changed: [localhost]

TASK [Create test instances] ***************************************************
Monday 02 June 2025 18:09:21 +0000 (0:00:14.715) 0:01:16.840 ***********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)

STILL ALIVE [task 'Create test instances' is running] **************************

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-3)

STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-4)

TASK [Add metadata to instances] ***********************************************
Monday 02 June 2025 18:12:44 +0000 (0:03:23.639) 0:04:40.480 ***********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Add tag to instances] ****************************************************
Monday 02 June 2025 18:13:09 +0000 (0:00:24.578) 0:05:05.059 ***********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Create test volume] ******************************************************
Monday 02 June 2025 18:13:42 +0000 (0:00:33.568) 0:05:38.628 ***********
changed: [localhost]

TASK [Attach test volume] ******************************************************
Monday 02 June 2025 18:13:50 +0000 (0:00:07.460) 0:05:46.088 ***********
changed: [localhost]

TASK [Create floating ip address] **********************************************
Monday 02 June 2025 18:14:04 +0000 (0:00:13.976) 0:06:00.065 ***********
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Monday 02 June 2025 18:14:09 +0000 (0:00:05.471) 0:06:05.537 ***********
ok: [localhost] => {
    "msg": "192.168.112.160"
}

PLAY RECAP *********************************************************************
2025-06-02 18:14:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 18:14:09 | INFO  | Please wait and do not abort execution.
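Booting the five test instances dominated this play (over three minutes, with STILL ALIVE keepalives); here Ansible handles the waiting internally. Outside of Ansible, a comparable wait could be sketched as below. The `wait_for_active` and `get_status` helpers are hypothetical; in real use `get_status` would wrap `openstack --os-cloud test server show "$1" -f value -c status`, and here it is stubbed so the sketch is self-contained:

```shell
#!/usr/bin/env bash
set -u

# Stub standing in for:
#   openstack --os-cloud test server show "$1" -f value -c status
get_status() { echo "ACTIVE"; }

# Poll get_status every 5 seconds until it reports ACTIVE or the timeout expires.
wait_for_active() {
  local name=$1 timeout=${2:-300} waited=0
  while [ "$waited" -lt "$timeout" ]; do
    if [ "$(get_status "$name")" = "ACTIVE" ]; then
      return 0
    fi
    sleep 5
    waited=$((waited + 5))
  done
  echo "timeout waiting for $name" >&2
  return 1
}

# Usage: wait_for_active test-3 600
```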
localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Monday 02 June 2025 18:14:09 +0000 (0:00:00.046) 0:06:05.583 ***********
===============================================================================
Create test instances ------------------------------------------------- 203.64s
Add tag to instances --------------------------------------------------- 33.57s
Add metadata to instances ---------------------------------------------- 24.58s
Create test network topology ------------------------------------------- 14.72s
Attach test volume ----------------------------------------------------- 13.98s
Add member roles to user test ------------------------------------------ 12.40s
Create test volume ------------------------------------------------------ 7.46s
Add manager role to user test-admin ------------------------------------- 6.46s
Create floating ip address ---------------------------------------------- 5.47s
Create ssh security group ----------------------------------------------- 5.21s
Create test server group ------------------------------------------------ 5.04s
Add rule to ssh security group ------------------------------------------ 4.36s
Create test user -------------------------------------------------------- 4.27s
Add rule to icmp security group ----------------------------------------- 4.25s
Create test-admin user -------------------------------------------------- 4.21s
Create icmp security group ---------------------------------------------- 4.10s
Create test project ----------------------------------------------------- 4.07s
Create test keypair ----------------------------------------------------- 3.96s
Create test domain ------------------------------------------------------ 3.72s
Print floating ip address ----------------------------------------------- 0.05s
+ server_list
+ openstack --os-cloud test server list
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
| ID                                   | Name   | Status | Networks                                           | Image        | Flavor     |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
| 038a8b3e-2b0d-412d-b513-c73dc382b812 | test-4 | ACTIVE | auto_allocated_network=10.42.0.36, 192.168.112.155 | Cirros 0.6.2 | SCS-1L-1-5 |
| 9730bc6b-0d7e-4cd2-9ae1-2e09da63620f | test-3 | ACTIVE | auto_allocated_network=10.42.0.27, 192.168.112.190 | Cirros 0.6.2 | SCS-1L-1-5 |
| c9ca9955-e5be-4fa9-9a33-153d73522b2e | test-2 | ACTIVE | auto_allocated_network=10.42.0.39, 192.168.112.132 | Cirros 0.6.2 | SCS-1L-1-5 |
| d5227dd5-792a-4202-824f-cd9d4a6d4f75 | test-1 | ACTIVE | auto_allocated_network=10.42.0.62, 192.168.112.200 | Cirros 0.6.2 | SCS-1L-1-5 |
| dce816c8-fffd-4945-b0bf-ca0086971ce1 | test   | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.160 | Cirros 0.6.2 | SCS-1L-1-5 |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
+ openstack --os-cloud test server show test
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-02T18:09:50.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.59, 192.168.112.160 |
| config_drive | |
| created | 2025-06-02T18:09:28Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 49cb9604f9743aafb41450b85ac8dbb2a1771e975fd15417d706910a |
| host_status | None |
| id | dce816c8-fffd-4945-b0bf-ca0086971ce1 |
| image | Cirros 0.6.2 (f43575b4-74fe-496e-8665-10c045e0ab73) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 1e6166f15d2b446f9024ebe2b47af594 |
| properties | hostname='test' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-02T18:12:49Z |
| user_id | f7235a919d5a41e389d607bd1458c000 |
| volumes_attached | delete_on_termination='False', id='ef716b36-61b3-4e0d-b0cb-9e03951ab40a' |
+ openstack --os-cloud test server show test-1
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-02T18:10:34.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.62, 192.168.112.200 |
| config_drive | |
| created | 2025-06-02T18:10:13Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 71f2fb5225d2018c47be46a639c5946a6344bfc43ab8cbd949f415b7 |
| host_status | None |
| id | d5227dd5-792a-4202-824f-cd9d4a6d4f75 |
| image | Cirros 0.6.2 (f43575b4-74fe-496e-8665-10c045e0ab73) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 1e6166f15d2b446f9024ebe2b47af594 |
| properties | hostname='test-1' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-06-02T18:12:54Z |
| user_id | f7235a919d5a41e389d607bd1458c000 |
| volumes_attached | |
+ openstack --os-cloud test server show test-2
| Field | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-06-02T18:11:15.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.39, 192.168.112.132 |
| config_drive | |
| created | 2025-06-02T18:10:53Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | eff250156ffcb0dfafd5e5e004514b6aea7f9d4eb03c2402a4875a9a |
| host_status | None |
| id | c9ca9955-e5be-4fa9-9a33-153d73522b2e |
| image | Cirros 0.6.2 (f43575b4-74fe-496e-8665-10c045e0ab73) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 1e6166f15d2b446f9024ebe2b47af594 |
| properties | hostname='test-2' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
2025-06-02 18:14:25.495050 | orchestrator
| | trusted_image_certificates | None | 2025-06-02 18:14:25.495057 | orchestrator | | updated | 2025-06-02T18:12:59Z | 2025-06-02 18:14:25.495065 | orchestrator | | user_id | f7235a919d5a41e389d607bd1458c000 | 2025-06-02 18:14:25.495069 | orchestrator | | volumes_attached | | 2025-06-02 18:14:25.499612 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:14:25.795929 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-02 18:14:29.120046 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:14:29.120132 | orchestrator | | Field | Value | 2025-06-02 18:14:29.120146 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:14:29.120157 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 18:14:29.120194 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 18:14:29.120208 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 18:14:29.120218 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-02 18:14:29.120242 | orchestrator | | 
OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 18:14:29.120254 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 18:14:29.120265 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 18:14:29.120275 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 18:14:29.120305 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 18:14:29.120317 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 18:14:29.120329 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 18:14:29.120335 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 18:14:29.120348 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 18:14:29.120355 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 18:14:29.120361 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 18:14:29.120367 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T18:11:49.000000 | 2025-06-02 18:14:29.120377 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 18:14:29.120384 | orchestrator | | accessIPv4 | | 2025-06-02 18:14:29.120390 | orchestrator | | accessIPv6 | | 2025-06-02 18:14:29.120396 | orchestrator | | addresses | auto_allocated_network=10.42.0.27, 192.168.112.190 | 2025-06-02 18:14:29.120407 | orchestrator | | config_drive | | 2025-06-02 18:14:29.120414 | orchestrator | | created | 2025-06-02T18:11:32Z | 2025-06-02 18:14:29.120424 | orchestrator | | description | None | 2025-06-02 18:14:29.120431 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 18:14:29.120437 | orchestrator | | hostId | 71f2fb5225d2018c47be46a639c5946a6344bfc43ab8cbd949f415b7 | 2025-06-02 18:14:29.120443 | 
orchestrator | | host_status | None | 2025-06-02 18:14:29.120450 | orchestrator | | id | 9730bc6b-0d7e-4cd2-9ae1-2e09da63620f | 2025-06-02 18:14:29.120456 | orchestrator | | image | Cirros 0.6.2 (f43575b4-74fe-496e-8665-10c045e0ab73) | 2025-06-02 18:14:29.120462 | orchestrator | | key_name | test | 2025-06-02 18:14:29.120469 | orchestrator | | locked | False | 2025-06-02 18:14:29.120475 | orchestrator | | locked_reason | None | 2025-06-02 18:14:29.120481 | orchestrator | | name | test-3 | 2025-06-02 18:14:29.120491 | orchestrator | | pinned_availability_zone | None | 2025-06-02 18:14:29.120502 | orchestrator | | progress | 0 | 2025-06-02 18:14:29.120509 | orchestrator | | project_id | 1e6166f15d2b446f9024ebe2b47af594 | 2025-06-02 18:14:29.120515 | orchestrator | | properties | hostname='test-3' | 2025-06-02 18:14:29.120521 | orchestrator | | security_groups | name='icmp' | 2025-06-02 18:14:29.120533 | orchestrator | | | name='ssh' | 2025-06-02 18:14:29.120540 | orchestrator | | server_groups | None | 2025-06-02 18:14:29.120550 | orchestrator | | status | ACTIVE | 2025-06-02 18:14:29.120556 | orchestrator | | tags | test | 2025-06-02 18:14:29.120563 | orchestrator | | trusted_image_certificates | None | 2025-06-02 18:14:29.120569 | orchestrator | | updated | 2025-06-02T18:13:04Z | 2025-06-02 18:14:29.120579 | orchestrator | | user_id | f7235a919d5a41e389d607bd1458c000 | 2025-06-02 18:14:29.120590 | orchestrator | | volumes_attached | | 2025-06-02 18:14:29.124797 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:14:29.429524 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-02 18:14:32.588779 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:14:32.588918 | orchestrator | | Field | Value | 2025-06-02 18:14:32.588943 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:14:32.588963 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 18:14:32.588980 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 18:14:32.589019 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 18:14:32.589041 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-02 18:14:32.589059 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 18:14:32.589106 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 18:14:32.589127 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 18:14:32.589146 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 18:14:32.589190 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 18:14:32.589203 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 18:14:32.589214 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 18:14:32.589225 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 18:14:32.589236 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 18:14:32.589247 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 18:14:32.589264 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 18:14:32.589276 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T18:12:29.000000 | 2025-06-02 18:14:32.589295 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 18:14:32.589306 | orchestrator | | accessIPv4 | | 2025-06-02 18:14:32.589317 | orchestrator | | accessIPv6 | | 2025-06-02 18:14:32.589328 | orchestrator | | addresses | auto_allocated_network=10.42.0.36, 192.168.112.155 | 2025-06-02 18:14:32.589346 | orchestrator | | config_drive | | 2025-06-02 18:14:32.589357 | orchestrator | | created | 2025-06-02T18:12:12Z | 2025-06-02 18:14:32.589368 | orchestrator | | description | None | 2025-06-02 18:14:32.589379 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 18:14:32.589390 | orchestrator | | hostId | 49cb9604f9743aafb41450b85ac8dbb2a1771e975fd15417d706910a | 2025-06-02 18:14:32.589406 | orchestrator | | host_status | None | 2025-06-02 18:14:32.589418 | orchestrator | | id | 038a8b3e-2b0d-412d-b513-c73dc382b812 | 2025-06-02 18:14:32.589436 | orchestrator | | image | Cirros 0.6.2 (f43575b4-74fe-496e-8665-10c045e0ab73) | 2025-06-02 18:14:32.589448 | orchestrator | | key_name | test | 2025-06-02 18:14:32.589468 | orchestrator | | locked | False | 2025-06-02 18:14:32.589486 | orchestrator | | locked_reason | None | 2025-06-02 18:14:32.589504 | orchestrator | | name | test-4 | 2025-06-02 18:14:32.589533 | orchestrator | | pinned_availability_zone | None | 2025-06-02 18:14:32.589551 | orchestrator | | progress | 0 | 2025-06-02 18:14:32.589570 | orchestrator | | project_id | 1e6166f15d2b446f9024ebe2b47af594 | 2025-06-02 18:14:32.589589 | orchestrator | | properties | hostname='test-4' | 2025-06-02 
18:14:32.589608 | orchestrator | | security_groups | name='icmp' | 2025-06-02 18:14:32.589669 | orchestrator | | | name='ssh' | 2025-06-02 18:14:32.589698 | orchestrator | | server_groups | None | 2025-06-02 18:14:32.589710 | orchestrator | | status | ACTIVE | 2025-06-02 18:14:32.589721 | orchestrator | | tags | test | 2025-06-02 18:14:32.589732 | orchestrator | | trusted_image_certificates | None | 2025-06-02 18:14:32.589743 | orchestrator | | updated | 2025-06-02T18:13:08Z | 2025-06-02 18:14:32.589761 | orchestrator | | user_id | f7235a919d5a41e389d607bd1458c000 | 2025-06-02 18:14:32.589774 | orchestrator | | volumes_attached | | 2025-06-02 18:14:32.593563 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:14:32.876758 | orchestrator | + server_ping 2025-06-02 18:14:32.877534 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-02 18:14:32.878046 | orchestrator | ++ tr -d '\r' 2025-06-02 18:14:35.841072 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:14:35.841180 | orchestrator | + ping -c3 192.168.112.160 2025-06-02 18:14:35.855060 | orchestrator | PING 192.168.112.160 (192.168.112.160) 56(84) bytes of data. 
2025-06-02 18:14:35.855166 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=1 ttl=63 time=6.75 ms
2025-06-02 18:14:36.854222 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=2 ttl=63 time=3.72 ms
2025-06-02 18:14:37.855571 | orchestrator | 64 bytes from 192.168.112.160: icmp_seq=3 ttl=63 time=2.45 ms
2025-06-02 18:14:37.855725 | orchestrator |
2025-06-02 18:14:37.855751 | orchestrator | --- 192.168.112.160 ping statistics ---
2025-06-02 18:14:37.855757 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-02 18:14:37.855761 | orchestrator | rtt min/avg/max/mdev = 2.451/4.309/6.752/1.803 ms
2025-06-02 18:14:37.856188 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:14:37.856684 | orchestrator | + ping -c3 192.168.112.190
2025-06-02 18:14:37.868830 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
2025-06-02 18:14:37.868955 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=9.78 ms
2025-06-02 18:14:38.862727 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.71 ms
2025-06-02 18:14:39.864819 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.11 ms
2025-06-02 18:14:39.865077 | orchestrator |
2025-06-02 18:14:39.865101 | orchestrator | --- 192.168.112.190 ping statistics ---
2025-06-02 18:14:39.865121 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:14:39.865139 | orchestrator | rtt min/avg/max/mdev = 2.111/4.866/9.781/3.483 ms
2025-06-02 18:14:39.865173 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:14:39.865192 | orchestrator | + ping -c3 192.168.112.132
2025-06-02 18:14:39.882907 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2025-06-02 18:14:39.882997 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=13.2 ms
2025-06-02 18:14:40.873182 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=1.93 ms
2025-06-02 18:14:41.874391 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.88 ms
2025-06-02 18:14:41.874497 | orchestrator |
2025-06-02 18:14:41.874508 | orchestrator | --- 192.168.112.132 ping statistics ---
2025-06-02 18:14:41.874517 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 18:14:41.874523 | orchestrator | rtt min/avg/max/mdev = 1.879/5.669/13.200/5.324 ms
2025-06-02 18:14:41.874530 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:14:41.874537 | orchestrator | + ping -c3 192.168.112.155
2025-06-02 18:14:41.888829 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data.
2025-06-02 18:14:41.888927 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=9.44 ms
2025-06-02 18:14:42.882526 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=2.34 ms
2025-06-02 18:14:43.885607 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=3.11 ms
2025-06-02 18:14:43.886096 | orchestrator |
2025-06-02 18:14:43.886126 | orchestrator | --- 192.168.112.155 ping statistics ---
2025-06-02 18:14:43.886141 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 18:14:43.886152 | orchestrator | rtt min/avg/max/mdev = 2.339/4.963/9.444/3.183 ms
2025-06-02 18:14:43.886181 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:14:43.886194 | orchestrator | + ping -c3 192.168.112.200
2025-06-02 18:14:43.900082 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-06-02 18:14:43.900188 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=9.86 ms
2025-06-02 18:14:44.894408 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.90 ms
2025-06-02 18:14:45.895351 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=2.29 ms
2025-06-02 18:14:45.895462 | orchestrator |
2025-06-02 18:14:45.895478 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-06-02 18:14:45.895491 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 18:14:45.895502 | orchestrator | rtt min/avg/max/mdev = 2.286/5.016/9.861/3.434 ms
2025-06-02 18:14:45.896167 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-02 18:14:46.243349 | orchestrator | ok: Runtime: 0:10:07.815039
2025-06-02 18:14:46.305914 |
2025-06-02 18:14:46.306133 | TASK [Run tempest]
2025-06-02 18:14:46.870570 | orchestrator | skipping: Conditional result was False
2025-06-02 18:14:46.892344 |
2025-06-02 18:14:46.892637 | TASK [Check prometheus alert status]
2025-06-02 18:14:47.452699 | orchestrator | skipping: Conditional result was False
2025-06-02 18:14:47.454821 |
2025-06-02 18:14:47.454952 | PLAY RECAP
2025-06-02 18:14:47.455035 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-02 18:14:47.455070 |
2025-06-02 18:14:47.688988 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-02 18:14:47.692723 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 18:14:48.503272 |
2025-06-02 18:14:48.503509 | PLAY [Post output play]
2025-06-02 18:14:48.524247 |
2025-06-02 18:14:48.524433 | LOOP [stage-output : Register sources]
2025-06-02 18:14:48.595682 |
2025-06-02 18:14:48.596053 | TASK [stage-output : Check sudo]
2025-06-02 18:14:49.469128 | orchestrator | sudo: a password is required
2025-06-02 18:14:49.639905 | orchestrator | ok: Runtime: 0:00:00.010501
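The `server_ping` smoke test traced above lists every ACTIVE floating IP and pings each one three times. A minimal sketch of that loop follows; the `openstack`/`tr` pipeline is copied from the trace, while `LISTER` and `PING_CMD` are hypothetical hooks added here purely so the loop can be exercised without a live cloud — the actual script hard-codes the CLI call and `ping -c3`.

```shell
# Sketch of the server_ping check from the job trace. Assumes a "test"
# cloud entry in clouds.yaml when run against a real deployment.
server_ping() {
    # Hypothetical hook: default to the exact command the trace shows.
    local lister=${LISTER:-'openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address"'}
    for address in $(eval "$lister" | tr -d '\r'); do
        # Three ICMP probes per address; any failed ping fails the check.
        ${PING_CMD:-ping -c3} "$address" || return 1
    done
}
```

Stripping carriage returns with `tr -d '\r'` matters because the CLI output may carry CRLF line endings, which would otherwise corrupt the addresses passed to `ping`.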
2025-06-02 18:14:49.658678 |
2025-06-02 18:14:49.658936 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 18:14:49.715094 |
2025-06-02 18:14:49.715405 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 18:14:49.797074 | orchestrator | ok
2025-06-02 18:14:49.807180 |
2025-06-02 18:14:49.807344 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 18:14:50.265007 | orchestrator | ok: "docs"
2025-06-02 18:14:50.265290 |
2025-06-02 18:14:50.555503 | orchestrator | ok: "artifacts"
2025-06-02 18:14:50.839373 | orchestrator | ok: "logs"
2025-06-02 18:14:50.858330 |
2025-06-02 18:14:50.858525 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 18:14:50.923226 |
2025-06-02 18:14:50.924085 | TASK [stage-output : Make all log files readable]
2025-06-02 18:14:51.282724 | orchestrator | ok
2025-06-02 18:14:51.291483 |
2025-06-02 18:14:51.291634 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 18:14:51.326773 | orchestrator | skipping: Conditional result was False
2025-06-02 18:14:51.335940 |
2025-06-02 18:14:51.336062 | TASK [stage-output : Discover log files for compression]
2025-06-02 18:14:51.359917 | orchestrator | skipping: Conditional result was False
2025-06-02 18:14:51.371637 |
2025-06-02 18:14:51.371786 | LOOP [stage-output : Archive everything from logs]
2025-06-02 18:14:51.421879 |
2025-06-02 18:14:51.422038 | PLAY [Post cleanup play]
2025-06-02 18:14:51.429929 |
2025-06-02 18:14:51.430097 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 18:14:51.495690 | orchestrator | ok
2025-06-02 18:14:51.509845 |
2025-06-02 18:14:51.509979 | TASK [Set cloud fact (local deployment)]
2025-06-02 18:14:51.545350 | orchestrator | skipping: Conditional result was False
2025-06-02 18:14:51.559281 |
2025-06-02 18:14:51.559415 | TASK [Clean the cloud environment]
2025-06-02 18:14:52.342685 | orchestrator | 2025-06-02 18:14:52 - clean up servers
2025-06-02 18:14:53.095559 | orchestrator | 2025-06-02 18:14:53 - testbed-manager
2025-06-02 18:14:53.181581 | orchestrator | 2025-06-02 18:14:53 - testbed-node-3
2025-06-02 18:14:53.273349 | orchestrator | 2025-06-02 18:14:53 - testbed-node-1
2025-06-02 18:14:53.376898 | orchestrator | 2025-06-02 18:14:53 - testbed-node-4
2025-06-02 18:14:53.468604 | orchestrator | 2025-06-02 18:14:53 - testbed-node-5
2025-06-02 18:14:53.569278 | orchestrator | 2025-06-02 18:14:53 - testbed-node-0
2025-06-02 18:14:53.668093 | orchestrator | 2025-06-02 18:14:53 - testbed-node-2
2025-06-02 18:14:53.754949 | orchestrator | 2025-06-02 18:14:53 - clean up keypairs
2025-06-02 18:14:53.773768 | orchestrator | 2025-06-02 18:14:53 - testbed
2025-06-02 18:14:53.800224 | orchestrator | 2025-06-02 18:14:53 - wait for servers to be gone
2025-06-02 18:15:06.775682 | orchestrator | 2025-06-02 18:15:06 - clean up ports
2025-06-02 18:15:07.003276 | orchestrator | 2025-06-02 18:15:07 - 15797291-dc64-4afc-8ff4-5a6296a82aff
2025-06-02 18:15:07.275724 | orchestrator | 2025-06-02 18:15:07 - 223cbdff-3c00-4dd7-81ac-7f803c980909
2025-06-02 18:15:07.534388 | orchestrator | 2025-06-02 18:15:07 - 2542e671-1e9b-4381-ba2e-7866f772209b
2025-06-02 18:15:07.797314 | orchestrator | 2025-06-02 18:15:07 - 78d3d585-6d0c-41f4-a168-75cc6ca7ca6e
2025-06-02 18:15:08.213909 | orchestrator | 2025-06-02 18:15:08 - 92c0feff-a6eb-469d-a769-a05ce97eb8c7
2025-06-02 18:15:08.469953 | orchestrator | 2025-06-02 18:15:08 - b17f1179-1fa7-4391-bdc4-2ff0411d2b7a
2025-06-02 18:15:08.759662 | orchestrator | 2025-06-02 18:15:08 - cf0868cb-03de-4e30-9fcd-a0cfa2de26ac
2025-06-02 18:15:08.976074 | orchestrator | 2025-06-02 18:15:08 - clean up volumes
2025-06-02 18:15:09.115093 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-3-node-base
2025-06-02 18:15:09.154073 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-1-node-base
2025-06-02 18:15:09.197052 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-0-node-base
2025-06-02 18:15:09.238939 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-manager-base
2025-06-02 18:15:09.284529 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-2-node-base
2025-06-02 18:15:09.328521 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-5-node-base
2025-06-02 18:15:09.383210 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-4-node-base
2025-06-02 18:15:09.426833 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-8-node-5
2025-06-02 18:15:09.472150 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-3-node-3
2025-06-02 18:15:09.517412 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-1-node-4
2025-06-02 18:15:09.567473 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-5-node-5
2025-06-02 18:15:09.608990 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-6-node-3
2025-06-02 18:15:09.651531 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-4-node-4
2025-06-02 18:15:09.697686 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-2-node-5
2025-06-02 18:15:09.739234 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-0-node-3
2025-06-02 18:15:09.785450 | orchestrator | 2025-06-02 18:15:09 - testbed-volume-7-node-4
2025-06-02 18:15:09.828414 | orchestrator | 2025-06-02 18:15:09 - disconnect routers
2025-06-02 18:15:09.904125 | orchestrator | 2025-06-02 18:15:09 - testbed
2025-06-02 18:15:10.971567 | orchestrator | 2025-06-02 18:15:10 - clean up subnets
2025-06-02 18:15:11.021308 | orchestrator | 2025-06-02 18:15:11 - subnet-testbed-management
2025-06-02 18:15:11.221721 | orchestrator | 2025-06-02 18:15:11 - clean up networks
2025-06-02 18:15:11.383719 | orchestrator | 2025-06-02 18:15:11 - net-testbed-management
2025-06-02 18:15:11.663377 | orchestrator | 2025-06-02 18:15:11 - clean up security groups
2025-06-02 18:15:11.705103 | orchestrator | 2025-06-02 18:15:11 - testbed-node
2025-06-02 18:15:11.820535 | orchestrator | 2025-06-02 18:15:11 - testbed-management
2025-06-02 18:15:11.966981 | orchestrator | 2025-06-02 18:15:11 - clean up floating ips
2025-06-02 18:15:12.003323 | orchestrator | 2025-06-02 18:15:12 - 81.163.192.157
2025-06-02 18:15:12.400053 | orchestrator | 2025-06-02 18:15:12 - clean up routers
2025-06-02 18:15:12.465251 | orchestrator | 2025-06-02 18:15:12 - testbed
2025-06-02 18:15:14.120244 | orchestrator | ok: Runtime: 0:00:21.934180
2025-06-02 18:15:14.124920 |
2025-06-02 18:15:14.125083 | PLAY RECAP
2025-06-02 18:15:14.125216 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-02 18:15:14.125277 |
2025-06-02 18:15:14.260762 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 18:15:14.261884 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 18:15:15.076450 |
2025-06-02 18:15:15.076664 | PLAY [Cleanup play]
2025-06-02 18:15:15.105354 |
2025-06-02 18:15:15.105528 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 18:15:15.162231 | orchestrator | ok
2025-06-02 18:15:15.172914 |
2025-06-02 18:15:15.173097 | TASK [Set cloud fact (local deployment)]
2025-06-02 18:15:15.209336 | orchestrator | skipping: Conditional result was False
2025-06-02 18:15:15.226938 |
2025-06-02 18:15:15.227124 | TASK [Clean the cloud environment]
2025-06-02 18:15:16.424824 | orchestrator | 2025-06-02 18:15:16 - clean up servers
2025-06-02 18:15:17.024731 | orchestrator | 2025-06-02 18:15:17 - clean up keypairs
2025-06-02 18:15:17.046466 | orchestrator | 2025-06-02 18:15:17 - wait for servers to be gone
2025-06-02 18:15:17.093795 | orchestrator | 2025-06-02 18:15:17 - clean up ports
2025-06-02 18:15:17.163009 | orchestrator | 2025-06-02 18:15:17 - clean up volumes
2025-06-02 18:15:17.232222 | orchestrator | 2025-06-02 18:15:17 - disconnect routers
2025-06-02 18:15:17.253397 | orchestrator | 2025-06-02 18:15:17 - clean up subnets
2025-06-02 18:15:17.274884 | orchestrator | 2025-06-02 18:15:17 - clean up networks
2025-06-02 18:15:17.457560 | orchestrator | 2025-06-02 18:15:17 - clean up security groups
2025-06-02 18:15:17.493266 | orchestrator | 2025-06-02 18:15:17 - clean up floating ips
2025-06-02 18:15:17.518537 | orchestrator | 2025-06-02 18:15:17 - clean up routers
2025-06-02 18:15:17.767976 | orchestrator | ok: Runtime: 0:00:01.509884
2025-06-02 18:15:17.771901 |
2025-06-02 18:15:17.772063 | PLAY RECAP
2025-06-02 18:15:17.772190 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-02 18:15:17.772253 |
2025-06-02 18:15:17.910111 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 18:15:17.912253 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 18:15:18.700314 |
2025-06-02 18:15:18.700478 | PLAY [Base post-fetch]
2025-06-02 18:15:18.716704 |
2025-06-02 18:15:18.716841 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-02 18:15:18.772665 | orchestrator | skipping: Conditional result was False
2025-06-02 18:15:18.788795 |
2025-06-02 18:15:18.789030 | TASK [fetch-output : Set log path for single node]
2025-06-02 18:15:18.853207 | orchestrator | ok
2025-06-02 18:15:18.869970 |
2025-06-02 18:15:18.870236 | LOOP [fetch-output : Ensure local output dirs]
2025-06-02 18:15:19.373026 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/work/logs"
2025-06-02 18:15:19.641487 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/work/artifacts"
2025-06-02 18:15:19.925714 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dd0960a543a64f20bce8e7355c8ec002/work/docs"
2025-06-02 18:15:19.958889 |
2025-06-02 18:15:19.959048 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-02 18:15:20.987646 | orchestrator | changed: .d..t...... ./
2025-06-02 18:15:20.988457 | orchestrator | changed: All items complete
2025-06-02 18:15:20.988841 |
2025-06-02 18:15:21.741233 | orchestrator | changed: .d..t...... ./
2025-06-02 18:15:22.580044 | orchestrator | changed: .d..t...... ./
2025-06-02 18:15:22.601386 |
2025-06-02 18:15:22.601535 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-02 18:15:22.635361 | orchestrator | skipping: Conditional result was False
2025-06-02 18:15:22.641846 | orchestrator | skipping: Conditional result was False
2025-06-02 18:15:22.663428 |
2025-06-02 18:15:22.663547 | PLAY RECAP
2025-06-02 18:15:22.663694 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-02 18:15:22.663740 |
2025-06-02 18:15:22.815462 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 18:15:22.816621 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 18:15:23.609170 |
2025-06-02 18:15:23.609378 | PLAY [Base post]
2025-06-02 18:15:23.624895 |
2025-06-02 18:15:23.625054 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-02 18:15:24.678298 | orchestrator | changed
2025-06-02 18:15:24.687255 |
2025-06-02 18:15:24.687387 | PLAY RECAP
2025-06-02 18:15:24.687461 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-02 18:15:24.687539 |
2025-06-02 18:15:24.845045 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 18:15:24.846724 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-02 18:15:25.662389 |
2025-06-02 18:15:25.662565 | PLAY [Base post-logs]
2025-06-02 18:15:25.673600 |
2025-06-02 18:15:25.673750 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-02 18:15:26.190370 | localhost | changed
2025-06-02 18:15:26.205404 |
2025-06-02 18:15:26.205616 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-02 18:15:26.234371 | localhost | ok
2025-06-02 18:15:26.241308 |
2025-06-02 18:15:26.241490 | TASK [Set zuul-log-path fact]
2025-06-02 18:15:26.259325 | localhost | ok
2025-06-02 18:15:26.273547 |
2025-06-02 18:15:26.273753 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 18:15:26.300887 | localhost | ok
2025-06-02 18:15:26.306976 |
2025-06-02 18:15:26.307135 | TASK [upload-logs : Create log directories]
2025-06-02 18:15:26.856473 | localhost | changed
2025-06-02 18:15:26.860559 |
2025-06-02 18:15:26.860728 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-02 18:15:27.433985 | localhost -> localhost | ok: Runtime: 0:00:00.008701
2025-06-02 18:15:27.443676 |
2025-06-02 18:15:27.443948 | TASK [upload-logs : Upload logs to log server]
2025-06-02 18:15:28.075177 | localhost | Output suppressed because no_log was given
2025-06-02 18:15:28.079721 |
2025-06-02 18:15:28.079941 | LOOP [upload-logs : Compress console log and json output]
2025-06-02 18:15:28.141575 | localhost | skipping: Conditional result was False
2025-06-02 18:15:28.148014 | localhost | skipping: Conditional result was False
2025-06-02 18:15:28.161232 |
2025-06-02 18:15:28.161393 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-02 18:15:28.212941 | localhost | skipping: Conditional result was False
2025-06-02 18:15:28.213562 |
2025-06-02 18:15:28.216994 | localhost | skipping: Conditional result was False
2025-06-02 18:15:28.223540 |
2025-06-02 18:15:28.223776 | LOOP [upload-logs : Upload console log and json output]
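Both cleanup plays above tear resources down in dependency order (servers and keypairs first, then a "wait for servers to be gone" pause before ports, volumes, routers, subnets, networks, security groups, floating IPs, and finally routers). The waiting step is the interesting part: ports and volumes cannot be deleted until Nova has actually released the instances. A minimal sketch of such a poll follows; the real tooling is internal to the testbed, the probe command is parameterized here so it can be tested, and the 60x5s budget is illustrative.

```shell
# Sketch of a "wait until the resource list is empty" poll, as used
# between "clean up servers" and "clean up ports" in the log above.
wait_until_empty() {
    # $1: command whose empty output signals the resources are gone
    # $2: max attempts (default 60), $3: seconds between attempts (default 5)
    local probe=$1 max=${2:-60} delay=${3:-5} tries=0
    while [ -n "$(eval "$probe")" ]; do
        tries=$((tries + 1))
        if [ "$tries" -ge "$max" ]; then
            return 1   # timed out; some resources still exist
        fi
        sleep "$delay"
    done
}

# e.g. wait_until_empty 'openstack --os-cloud test server list -f value -c ID'
```

Polling the list (rather than sleeping a fixed time) is why the second, idempotent cleanup run in the log completes in about a second: every probe comes back empty immediately.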